[jira] [Updated] (FLINK-35473) FLIP-457: Improve Table/SQL Configuration for Flink 2.0

2024-06-15 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-35473:
--
Release Note: 
As Apache Flink progresses to version 2.0, several table configuration options 
are being deprecated and replaced to improve user-friendliness and 
maintainability.

Deprecated Configuration Options

The following table configuration options are deprecated in this release and 
will be removed in Flink 2.0:

Deprecated Due to TPC Testing Irrelevance

These options were previously used for fine-tuning TPC testing but are no 
longer needed by the current Flink planner:

table.exec.range-sort.enabled
table.optimizer.rows-per-local-agg
table.optimizer.join.null-filter-threshold
table.optimizer.semi-anti-join.build-distinct.ndv-ratio
table.optimizer.shuffle-by-partial-key-enabled
table.optimizer.smj.remove-sort-enabled
table.optimizer.cnf-nodes-limit
Deprecated Due to Legacy Interface

These options were introduced for the now-obsolete FilterableTableSource 
interface:

table.optimizer.source.aggregate-pushdown-enabled
table.optimizer.source.predicate-pushdown-enabled
New and Updated Configuration Options
SQL Client Option
sql-client.display.max-column-width has been replaced with 
table.display.max-column-width.
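
For example, in a SQL client session the new key is set the same way as the old one was (the width value here is illustrative, not a recommended default):

```sql
-- Before (deprecated in 1.20):
-- SET 'sql-client.display.max-column-width' = '40';

-- After:
SET 'table.display.max-column-width' = '40';
```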
Batch Execution Options

The following options have been moved from 
org.apache.flink.table.planner.codegen.agg.batch.HashAggCodeGenerator to 
org.apache.flink.table.api.config. and promoted to PublicEvolving:

table.exec.local-hash-agg.adaptive.enabled
table.exec.local-hash-agg.adaptive.sampling-threshold
table.exec.local-hash-agg.adaptive.distinct-value-rate-threshold
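
Once promoted, these can be set like any other table option; a sketch (the values shown are illustrative, not recommended defaults):

```sql
SET 'table.exec.local-hash-agg.adaptive.enabled' = 'true';
SET 'table.exec.local-hash-agg.adaptive.sampling-threshold' = '500000';
SET 'table.exec.local-hash-agg.adaptive.distinct-value-rate-threshold' = '0.5';
```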
Lookup Hint Options

The following options have been moved from 
org.apache.flink.table.planner.hint.LookupJoinHintOptions to 
org.apache.flink.table.api.config.LookupJoinHintOptions and promoted to 
PublicEvolving:

table
async
output-mode
capacity
timeout
retry-predicate
retry-strategy
fixed-delay
max-attempts
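
These are the option keys accepted by the LOOKUP query hint. A hypothetical lookup join using several of them (table and column names are invented for illustration):

```sql
SELECT /*+ LOOKUP('table'='Customers',
                  'async'='true',
                  'output-mode'='allow_unordered',
                  'capacity'='100',
                  'timeout'='180s',
                  'retry-predicate'='lookup_miss',
                  'retry-strategy'='fixed_delay',
                  'fixed-delay'='10s',
                  'max-attempts'='3') */
  o.order_id, c.name
FROM Orders AS o
JOIN Customers FOR SYSTEM_TIME AS OF o.proc_time AS c
  ON o.customer_id = c.id;
```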
Optimizer Options

The following options have been moved from 
org.apache.flink.table.planner.plan.optimize.RelNodeBlock to 
org.apache.flink.table.api.config.OptimizerConfigOptions and promoted to 
PublicEvolving:

table.optimizer.union-all-as-breakpoint-enabled
table.optimizer.reuse-optimize-block-with-digest-enabled
Aggregate Optimizer Option

The following option has been moved from 
org.apache.flink.table.planner.plan.rules.physical.stream.IncrementalAggregateRule
 to org.apache.flink.table.api.config.OptimizerConfigOptions and promoted to 
PublicEvolving:

table.optimizer.incremental-agg-enabled
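
After the move, the promoted optimizer options are plain table options and can be toggled directly (values here are illustrative):

```sql
SET 'table.optimizer.union-all-as-breakpoint-enabled' = 'false';
SET 'table.optimizer.incremental-agg-enabled' = 'true';
```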

> FLIP-457: Improve Table/SQL Configuration for Flink 2.0
> ---
>
> Key: FLINK-35473
> URL: https://issues.apache.org/jira/browse/FLINK-35473
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> This is the parent task for 
> [FLIP-457|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=307136992].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35473) FLIP-457: Improve Table/SQL Configuration for Flink 2.0

2024-06-15 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17855276#comment-17855276
 ] 

Jane Chan commented on FLINK-35473:
---

As Apache Flink progresses to version 2.0, several table configuration options 
are being deprecated and replaced to improve user-friendliness and 
maintainability.
h4. Deprecated Configuration Options

The following table configuration options are deprecated in this release and 
will be removed in Flink 2.0:
h5. Deprecated Due to TPC Testing Irrelevance

These options were previously used for fine-tuning TPC testing but are no 
longer needed by the current Flink planner:
 * {{table.exec.range-sort.enabled}}
 * {{table.optimizer.rows-per-local-agg}}
 * {{table.optimizer.join.null-filter-threshold}}
 * {{table.optimizer.semi-anti-join.build-distinct.ndv-ratio}}
 * {{table.optimizer.shuffle-by-partial-key-enabled}}
 * {{table.optimizer.smj.remove-sort-enabled}}
 * {{table.optimizer.cnf-nodes-limit}}

h5. Deprecated Due to Legacy Interface

These options were introduced for the now-obsolete FilterableTableSource 
interface:
 * {{table.optimizer.source.aggregate-pushdown-enabled}}
 * {{table.optimizer.source.predicate-pushdown-enabled}}

h4. New and Updated Configuration Options
h5. SQL Client Option
 * {{sql-client.display.max-column-width}} has been replaced with 
{{table.display.max-column-width}}.

h5. Batch Execution Options

The following options have been moved from 
{{org.apache.flink.table.planner.codegen.agg.batch.HashAggCodeGenerator}} to 
{{org.apache.flink.table.api.config.}} and promoted to PublicEvolving:
 * {{table.exec.local-hash-agg.adaptive.enabled}}
 * {{table.exec.local-hash-agg.adaptive.sampling-threshold}}
 * {{table.exec.local-hash-agg.adaptive.distinct-value-rate-threshold}}

h5. Lookup Hint Options

The following options have been moved from 
{{org.apache.flink.table.planner.hint.LookupJoinHintOptions}} to 
{{org.apache.flink.table.api.config.LookupJoinHintOptions}} and promoted to 
PublicEvolving:
 * {{table}}
 * {{async}}
 * {{output-mode}}
 * {{capacity}}
 * {{timeout}}
 * {{retry-predicate}}
 * {{retry-strategy}}
 * {{fixed-delay}}
 * {{max-attempts}}

h5. Optimizer Options

The following options have been moved from 
{{org.apache.flink.table.planner.plan.optimize.RelNodeBlock}} to 
{{org.apache.flink.table.api.config.OptimizerConfigOptions}} and promoted to 
PublicEvolving:
 * {{table.optimizer.union-all-as-breakpoint-enabled}}
 * {{table.optimizer.reuse-optimize-block-with-digest-enabled}}

h5. Aggregate Optimizer Option

The following option has been moved from 
{{org.apache.flink.table.planner.plan.rules.physical.stream.IncrementalAggregateRule}}
 to {{org.apache.flink.table.api.config.OptimizerConfigOptions}} and promoted 
to PublicEvolving:
 * {{table.optimizer.incremental-agg-enabled}}

> FLIP-457: Improve Table/SQL Configuration for Flink 2.0
> ---
>
> Key: FLINK-35473
> URL: https://issues.apache.org/jira/browse/FLINK-35473
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> This is the parent task for 
> [FLIP-457|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=307136992].





[jira] [Resolved] (FLINK-34914) FLIP-436: Introduce Catalog-related Syntax

2024-06-14 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan resolved FLINK-34914.
---
Fix Version/s: 1.20.0
   Resolution: Fixed

> FLIP-436: Introduce Catalog-related Syntax
> --
>
> Key: FLINK-34914
> URL: https://issues.apache.org/jira/browse/FLINK-34914
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
> Fix For: 1.20.0
>
>
> Umbrella issue for: 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-436%3A+Introduce+Catalog-related+Syntax





[jira] [Closed] (FLINK-34917) Support `CREATE CATALOG IF NOT EXISTS` with comment

2024-06-14 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan closed FLINK-34917.
-

> Support `CREATE CATALOG IF NOT EXISTS` with comment
> 
>
> Key: FLINK-34917
> URL: https://issues.apache.org/jira/browse/FLINK-34917
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
> Attachments: image-2024-03-22-18-31-59-632.png
>
>
> We propose to introduce a `getComment()` method in `CatalogDescriptor`, for 
> the following reasons.
> 1. Design consistency: this follows FLIP-295 [1], which introduced the 
> `CatalogStore` component. `CatalogDescriptor` includes names and attributes, 
> both of which describe the catalog, so `comment` can be added smoothly.
> 2. It extends an existing class rather than adding a new method to an 
> existing interface. In particular, the `Catalog` interface, as a core 
> interface, is used by a series of important components such as 
> `CatalogFactory`, `CatalogManager`, and `FactoryUtil`, and is implemented by 
> a large number of connectors such as JDBC, Paimon, and Hive. Adding methods 
> to it would greatly increase the implementation complexity and, more 
> importantly, the cost of iteration, maintenance, and verification.
>  
> {{IF NOT EXISTS}} clause: if the catalog already exists, nothing happens.
> {{COMMENT}} clause: an optional string literal that describes the catalog.
> NOTICE: we only need to introduce the '[IF NOT EXISTS]' and '[COMMENT]' 
> clauses to the 'CREATE CATALOG' statement.
> !image-2024-03-22-18-31-59-632.png|width=795,height=87!
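
The extended statement described above would look roughly like this (the catalog name, comment, and WITH options are invented; the available options depend on the catalog type):

```sql
CREATE CATALOG IF NOT EXISTS my_catalog
COMMENT 'Catalog for staging tables'
WITH ('type' = 'generic_in_memory');
```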





[jira] [Resolved] (FLINK-34917) Support `CREATE CATALOG IF NOT EXISTS` with comment

2024-06-14 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan resolved FLINK-34917.
---
Fix Version/s: 1.20.0
   Resolution: Fixed

Fixed in master 4044b9ea42aa6cf4a638b4ed46219fed94ce84bf

> Support `CREATE CATALOG IF NOT EXISTS` with comment
> 
>
> Key: FLINK-34917
> URL: https://issues.apache.org/jira/browse/FLINK-34917
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
> Attachments: image-2024-03-22-18-31-59-632.png
>
>
> We propose to introduce a `getComment()` method in `CatalogDescriptor`, for 
> the following reasons.
> 1. Design consistency: this follows FLIP-295 [1], which introduced the 
> `CatalogStore` component. `CatalogDescriptor` includes names and attributes, 
> both of which describe the catalog, so `comment` can be added smoothly.
> 2. It extends an existing class rather than adding a new method to an 
> existing interface. In particular, the `Catalog` interface, as a core 
> interface, is used by a series of important components such as 
> `CatalogFactory`, `CatalogManager`, and `FactoryUtil`, and is implemented by 
> a large number of connectors such as JDBC, Paimon, and Hive. Adding methods 
> to it would greatly increase the implementation complexity and, more 
> importantly, the cost of iteration, maintenance, and verification.
>  
> {{IF NOT EXISTS}} clause: if the catalog already exists, nothing happens.
> {{COMMENT}} clause: an optional string literal that describes the catalog.
> NOTICE: we only need to introduce the '[IF NOT EXISTS]' and '[COMMENT]' 
> clauses to the 'CREATE CATALOG' statement.
> !image-2024-03-22-18-31-59-632.png|width=795,height=87!





[jira] [Updated] (FLINK-34917) Support `CREATE CATALOG IF NOT EXISTS` with comment

2024-06-14 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-34917:
--
Summary: Support `CREATE CATALOG IF NOT EXISTS` with comment  (was: 
Introduce comment for CatalogStore & Support enhanced `CREATE CATALOG` syntax)

> Support `CREATE CATALOG IF NOT EXISTS` with comment
> 
>
> Key: FLINK-34917
> URL: https://issues.apache.org/jira/browse/FLINK-34917
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-03-22-18-31-59-632.png
>
>
> We propose to introduce a `getComment()` method in `CatalogDescriptor`, for 
> the following reasons.
> 1. Design consistency: this follows FLIP-295 [1], which introduced the 
> `CatalogStore` component. `CatalogDescriptor` includes names and attributes, 
> both of which describe the catalog, so `comment` can be added smoothly.
> 2. It extends an existing class rather than adding a new method to an 
> existing interface. In particular, the `Catalog` interface, as a core 
> interface, is used by a series of important components such as 
> `CatalogFactory`, `CatalogManager`, and `FactoryUtil`, and is implemented by 
> a large number of connectors such as JDBC, Paimon, and Hive. Adding methods 
> to it would greatly increase the implementation complexity and, more 
> importantly, the cost of iteration, maintenance, and verification.
>  
> {{IF NOT EXISTS}} clause: if the catalog already exists, nothing happens.
> {{COMMENT}} clause: an optional string literal that describes the catalog.
> NOTICE: we only need to introduce the '[IF NOT EXISTS]' and '[COMMENT]' 
> clauses to the 'CREATE CATALOG' statement.
> !image-2024-03-22-18-31-59-632.png|width=795,height=87!





[jira] [Commented] (FLINK-35597) Fix unstable LocatableSplitAssignerTest#testConcurrentSplitAssignmentForMultipleHosts

2024-06-13 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854910#comment-17854910
 ] 

Jane Chan commented on FLINK-35597:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=60255=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=24c3384f-1bcb-57b3-224f-51bf973bbee8

> Fix unstable 
> LocatableSplitAssignerTest#testConcurrentSplitAssignmentForMultipleHosts
> -
>
> Key: FLINK-35597
> URL: https://issues.apache.org/jira/browse/FLINK-35597
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Resolved] (FLINK-35164) Support `ALTER CATALOG RESET` syntax

2024-06-13 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan resolved FLINK-35164.
---
Fix Version/s: 1.20.0
   Resolution: Fixed

Fixed in master 9d1690387849303b27050bb0cefaa1bad6e3fb98

> Support `ALTER CATALOG RESET` syntax
> 
>
> Key: FLINK-35164
> URL: https://issues.apache.org/jira/browse/FLINK-35164
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
> Attachments: image-2024-04-18-23-26-59-854.png
>
>
> h3. ALTER CATALOG catalog_name RESET (key1, key2, ...)
> Resets one or more properties to their default values in the specified catalog.
> !image-2024-04-18-23-26-59-854.png|width=781,height=527!
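
Concretely, the statement described above takes one or more quoted property keys (the catalog and key names here are invented for illustration):

```sql
-- Revert 'default-database' in catalog 'my_catalog' to its default value.
ALTER CATALOG my_catalog RESET ('default-database');
```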





[jira] [Closed] (FLINK-35164) Support `ALTER CATALOG RESET` syntax

2024-06-13 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan closed FLINK-35164.
-

> Support `ALTER CATALOG RESET` syntax
> 
>
> Key: FLINK-35164
> URL: https://issues.apache.org/jira/browse/FLINK-35164
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
> Attachments: image-2024-04-18-23-26-59-854.png
>
>
> h3. ALTER CATALOG catalog_name RESET (key1, key2, ...)
> Resets one or more properties to their default values in the specified catalog.
> !image-2024-04-18-23-26-59-854.png|width=781,height=527!





[jira] [Assigned] (FLINK-35164) Support `ALTER CATALOG RESET` syntax

2024-06-13 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan reassigned FLINK-35164:
-

Assignee: Yubin Li

> Support `ALTER CATALOG RESET` syntax
> 
>
> Key: FLINK-35164
> URL: https://issues.apache.org/jira/browse/FLINK-35164
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-04-18-23-26-59-854.png
>
>
> h3. ALTER CATALOG catalog_name RESET (key1, key2, ...)
> Resets one or more properties to their default values in the specified catalog.
> !image-2024-04-18-23-26-59-854.png|width=781,height=527!





[jira] [Updated] (FLINK-35569) SnapshotFileMergingCompatibilityITCase#testSwitchFromEnablingToDisablingFileMerging failed

2024-06-13 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-35569:
--
Fix Version/s: 1.20.0

> SnapshotFileMergingCompatibilityITCase#testSwitchFromEnablingToDisablingFileMerging
>  failed
> --
>
> Key: FLINK-35569
> URL: https://issues.apache.org/jira/browse/FLINK-35569
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Build System / CI
>Affects Versions: 1.20.0
>Reporter: Jane Chan
>Assignee: Zakelly Lan
>Priority: Major
> Fix For: 1.20.0
>
>
> [https://github.com/apache/flink/actions/runs/9467135511/job/26081097181]
> The parameterized test fails when RestoreMode is "CLAIM" and 
> fileMergingAcrossBoundary is false.





[jira] [Commented] (FLINK-35473) FLIP-457: Improve Table/SQL Configuration for Flink 2.0

2024-06-13 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854642#comment-17854642
 ] 

Jane Chan commented on FLINK-35473:
---

[~xuannan] Certainly!

> FLIP-457: Improve Table/SQL Configuration for Flink 2.0
> ---
>
> Key: FLINK-35473
> URL: https://issues.apache.org/jira/browse/FLINK-35473
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> This is the parent task for 
> [FLIP-457|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=307136992].





[jira] [Commented] (FLINK-35569) SnapshotFileMergingCompatibilityITCase#testSwitchFromEnablingToDisablingFileMerging failed

2024-06-12 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854409#comment-17854409
 ] 

Jane Chan commented on FLINK-35569:
---

https://github.com/apache/flink/actions/runs/9480683299/job/26122328886

> SnapshotFileMergingCompatibilityITCase#testSwitchFromEnablingToDisablingFileMerging
>  failed
> --
>
> Key: FLINK-35569
> URL: https://issues.apache.org/jira/browse/FLINK-35569
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Build System / CI
>Affects Versions: 1.20.0
>Reporter: Jane Chan
>Assignee: Zakelly Lan
>Priority: Major
>
> [https://github.com/apache/flink/actions/runs/9467135511/job/26081097181]
> The parameterized test fails when RestoreMode is "CLAIM" and 
> fileMergingAcrossBoundary is false.





[jira] [Commented] (FLINK-35569) SnapshotFileMergingCompatibilityITCase#testSwitchFromEnablingToDisablingFileMerging failed

2024-06-11 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854224#comment-17854224
 ] 

Jane Chan commented on FLINK-35569:
---

Hi [~Zakelly], would you mind sparing some time to take a look?

> SnapshotFileMergingCompatibilityITCase#testSwitchFromEnablingToDisablingFileMerging
>  failed
> --
>
> Key: FLINK-35569
> URL: https://issues.apache.org/jira/browse/FLINK-35569
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Build System / CI
>Affects Versions: 1.20.0
>Reporter: Jane Chan
>Priority: Major
>
> [https://github.com/apache/flink/actions/runs/9467135511/job/26081097181]
> The parameterized test fails when RestoreMode is "CLAIM" and 
> fileMergingAcrossBoundary is false.





[jira] [Updated] (FLINK-35569) SnapshotFileMergingCompatibilityITCase#testSwitchFromEnablingToDisablingFileMerging failed

2024-06-11 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-35569:
--
Description: 
[https://github.com/apache/flink/actions/runs/9467135511/job/26081097181]

The parameterized test fails when RestoreMode is "CLAIM" and 
fileMergingAcrossBoundary is false.

> SnapshotFileMergingCompatibilityITCase#testSwitchFromEnablingToDisablingFileMerging
>  failed
> --
>
> Key: FLINK-35569
> URL: https://issues.apache.org/jira/browse/FLINK-35569
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Build System / CI
>Affects Versions: 1.20.0
>Reporter: Jane Chan
>Priority: Major
>
> [https://github.com/apache/flink/actions/runs/9467135511/job/26081097181]
> The parameterized test fails when RestoreMode is "CLAIM" and 
> fileMergingAcrossBoundary is false.





[jira] [Created] (FLINK-35569) SnapshotFileMergingCompatibilityITCase#testSwitchFromEnablingToDisablingFileMerging failed

2024-06-11 Thread Jane Chan (Jira)
Jane Chan created FLINK-35569:
-

 Summary: 
SnapshotFileMergingCompatibilityITCase#testSwitchFromEnablingToDisablingFileMerging
 failed
 Key: FLINK-35569
 URL: https://issues.apache.org/jira/browse/FLINK-35569
 Project: Flink
  Issue Type: Bug
  Components: Build System / Azure Pipelines, Build System / CI
Affects Versions: 1.20.0
Reporter: Jane Chan








[jira] [Closed] (FLINK-35473) FLIP-457: Improve Table/SQL Configuration for Flink 2.0

2024-06-11 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan closed FLINK-35473.
-

> FLIP-457: Improve Table/SQL Configuration for Flink 2.0
> ---
>
> Key: FLINK-35473
> URL: https://issues.apache.org/jira/browse/FLINK-35473
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> This is the parent task for 
> [FLIP-457|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=307136992].





[jira] [Comment Edited] (FLINK-35473) FLIP-457: Improve Table/SQL Configuration for Flink 2.0

2024-06-11 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854068#comment-17854068
 ] 

Jane Chan edited comment on FLINK-35473 at 6/11/24 2:10 PM:


Fixed in master
93d49ff6eb9f61cb2450d0b25732f4d8923b840d,
b7b1fc2c29995135b9005f07e385986a40c65621, 
fbf0f28fef737d47b45815d3f77c6a842167c3e8,
fbacf22a057e52c06a10988c308dfb31afbbcb12,
6dbe7bf5c306551836ec89c70f9aaab317f55e10,
526f9b034763fd022a52fe84b2c3227c59a78df1


was (Author: qingyue):
Fixed in master
93d49ff6eb9f61cb2450d0b25732f4d8923b840d,
b7b1fc2c29995135b9005f07e385986a40c65621, 
fbf0f28fef737d47b45815d3f77c6a842167c3e8,
fbacf22a057e52c06a10988c308dfb31afbbcb12,
6dbe7bf5c306551836ec89c70f9aaab317f55e10,
526f9b034763fd022a52fe84b2c3227c59a78df1

> FLIP-457: Improve Table/SQL Configuration for Flink 2.0
> ---
>
> Key: FLINK-35473
> URL: https://issues.apache.org/jira/browse/FLINK-35473
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> This is the parent task for 
> [FLIP-457|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=307136992].





[jira] [Comment Edited] (FLINK-35473) FLIP-457: Improve Table/SQL Configuration for Flink 2.0

2024-06-11 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854068#comment-17854068
 ] 

Jane Chan edited comment on FLINK-35473 at 6/11/24 2:10 PM:


Fixed in master
93d49ff6eb9f61cb2450d0b25732f4d8923b840d,
b7b1fc2c29995135b9005f07e385986a40c65621, 
fbf0f28fef737d47b45815d3f77c6a842167c3e8,
fbacf22a057e52c06a10988c308dfb31afbbcb12,
6dbe7bf5c306551836ec89c70f9aaab317f55e10,
526f9b034763fd022a52fe84b2c3227c59a78df1


was (Author: qingyue):
Fixed in master 93d49ff6eb9f61cb2450d0b25732f4d8923b840d, 
b7b1fc2c29995135b9005f07e385986a40c65621, 
fbf0f28fef737d47b45815d3f77c6a842167c3e8, 
fbacf22a057e52c06a10988c308dfb31afbbcb12, 
6dbe7bf5c306551836ec89c70f9aaab317f55e10, 
526f9b034763fd022a52fe84b2c3227c59a78df1

> FLIP-457: Improve Table/SQL Configuration for Flink 2.0
> ---
>
> Key: FLINK-35473
> URL: https://issues.apache.org/jira/browse/FLINK-35473
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> This is the parent task for 
> [FLIP-457|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=307136992].





[jira] [Resolved] (FLINK-35473) FLIP-457: Improve Table/SQL Configuration for Flink 2.0

2024-06-11 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan resolved FLINK-35473.
---
Resolution: Fixed

Fixed in master 93d49ff6eb9f61cb2450d0b25732f4d8923b840d, 
b7b1fc2c29995135b9005f07e385986a40c65621, 
fbf0f28fef737d47b45815d3f77c6a842167c3e8, 
fbacf22a057e52c06a10988c308dfb31afbbcb12, 
6dbe7bf5c306551836ec89c70f9aaab317f55e10, 
526f9b034763fd022a52fe84b2c3227c59a78df1

> FLIP-457: Improve Table/SQL Configuration for Flink 2.0
> ---
>
> Key: FLINK-35473
> URL: https://issues.apache.org/jira/browse/FLINK-35473
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> This is the parent task for 
> [FLIP-457|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=307136992].





[jira] [Updated] (FLINK-35473) FLIP-457: Improve Table/SQL Configuration for Flink 2.0

2024-06-06 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-35473:
--
Labels: pull-request-available  (was: )

> FLIP-457: Improve Table/SQL Configuration for Flink 2.0
> ---
>
> Key: FLINK-35473
> URL: https://issues.apache.org/jira/browse/FLINK-35473
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> This is the parent task for 
> [FLIP-457|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=307136992].





[jira] [Commented] (FLINK-35473) FLIP-457: Improve Table/SQL Configuration for Flink 2.0

2024-06-04 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17851938#comment-17851938
 ] 

Jane Chan commented on FLINK-35473:
---

[~lincoln.86xy] Thank you for the reminder. I'm planning to open a PR in the 
coming days, and it would be great if you could help review it.

> FLIP-457: Improve Table/SQL Configuration for Flink 2.0
> ---
>
> Key: FLINK-35473
> URL: https://issues.apache.org/jira/browse/FLINK-35473
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
> Fix For: 1.20.0
>
>
> This is the parent task for 
> [FLIP-457|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=307136992].





[jira] [Updated] (FLINK-35473) FLIP-457: Improve Table/SQL Configuration for Flink 2.0

2024-05-29 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-35473:
--
Description: This is the parent task for 
[FLIP-457|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=307136992].

> FLIP-457: Improve Table/SQL Configuration for Flink 2.0
> ---
>
> Key: FLINK-35473
> URL: https://issues.apache.org/jira/browse/FLINK-35473
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>
> This is the parent task for 
> [FLIP-457|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=307136992].





[jira] [Assigned] (FLINK-35473) FLIP-457: Improve Table/SQL Configuration for Flink 2.0

2024-05-29 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan reassigned FLINK-35473:
-

Assignee: Jane Chan

> FLIP-457: Improve Table/SQL Configuration for Flink 2.0
> ---
>
> Key: FLINK-35473
> URL: https://issues.apache.org/jira/browse/FLINK-35473
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>






[jira] [Created] (FLINK-35473) FLIP-457: Improve Table/SQL Configuration for Flink 2.0

2024-05-28 Thread Jane Chan (Jira)
Jane Chan created FLINK-35473:
-

 Summary: FLIP-457: Improve Table/SQL Configuration for Flink 2.0
 Key: FLINK-35473
 URL: https://issues.apache.org/jira/browse/FLINK-35473
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Affects Versions: 1.20.0
Reporter: Jane Chan








[jira] [Commented] (FLINK-35318) incorrect timezone handling for TIMESTAMP_WITH_LOCAL_TIME_ZONE type during predicate pushdown

2024-05-21 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848071#comment-17848071
 ] 

Jane Chan commented on FLINK-35318:
---

Thanks for your contribution [~linshangquan], I'll take a look as soon as I
can.

> incorrect timezone handling for TIMESTAMP_WITH_LOCAL_TIME_ZONE type during 
> predicate pushdown
> -
>
> Key: FLINK-35318
> URL: https://issues.apache.org/jira/browse/FLINK-35318
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.18.1
> Environment: flink version 1.18.1
> iceberg version 1.15.1
>Reporter: linshangquan
>Assignee: linshangquan
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-05-09-14-06-58-007.png, 
> image-2024-05-09-14-09-38-453.png, image-2024-05-09-14-11-38-476.png, 
> image-2024-05-09-14-22-14-417.png, image-2024-05-09-14-22-59-370.png, 
> image-2024-05-09-18-52-03-741.png, image-2024-05-09-18-52-28-584.png
>
>
> In our scenario, we have an Iceberg table that contains a column named 'time' 
> of the {{timestamptz}} data type. This column has 10 rows of data where the 
> 'time' value is {{'2024-04-30 07:00:00'}} expressed in the "Asia/Shanghai" 
> timezone.
> !image-2024-05-09-14-06-58-007.png!
>  
> We encountered a strange phenomenon when accessing the table using 
> Iceberg-flink.
> When the {{WHERE}} clause includes the {{time}} column, the results are 
> incorrect.
> ZoneId.{_}systemDefault{_}() = "Asia/Shanghai" 
> !image-2024-05-09-18-52-03-741.png!
> When there is no {{WHERE}} clause, the results are correct.
> !image-2024-05-09-18-52-28-584.png!
> During debugging, we found that when a {{WHERE}} clause is present, a 
> {{FilterPushDownSpec}} is generated, and this {{FilterPushDownSpec}} utilizes 
> {{RexNodeToExpressionConverter}} for translation.
> !image-2024-05-09-14-11-38-476.png!
> !image-2024-05-09-14-22-59-370.png!
> When {{RexNodeToExpressionConverter#visitLiteral}} encounters a 
> {{TIMESTAMP_WITH_LOCAL_TIME_ZONE}} type, it uses the specified timezone 
> "Asia/Shanghai" to convert the {{TimestampString}} type to an {{Instant}} 
> type. However, the upstream {{TimestampString}} data has already been 
> processed in UTC timezone. By applying the local timezone processing here, an 
> error occurs due to the mismatch in timezones.
> We wonder whether the handling of {{TIMESTAMP_WITH_LOCAL_TIME_ZONE}} data in 
> {{RexNodeToExpressionConverter#visitLiteral}} is a bug, and whether it should 
> process the data in the UTC timezone instead.
>  
> Please help confirm if this is the issue, and if so, we can submit a patch to 
> fix it.
>  
>  
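The double conversion described above can be illustrated with plain java.time (a standalone sketch using the literal and zone from the report, not actual Flink code):

```java
import java.time.Duration;
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;

public class TimezoneMismatchDemo {
    public static void main(String[] args) {
        // The literal '2024-04-30 07:00:00' (Asia/Shanghai) has already been
        // normalized to UTC upstream, so the wall clock handed to the
        // converter is 2024-04-29 23:00:00.
        LocalDateTime alreadyUtc = LocalDateTime.of(2024, 4, 29, 23, 0, 0);

        // Correct: interpret the normalized wall clock in UTC.
        Instant correct = alreadyUtc.toInstant(ZoneOffset.UTC);

        // Buggy behavior described in the report: re-applying the session
        // time zone shifts the instant by the zone offset a second time.
        Instant doubleConverted =
                alreadyUtc.atZone(ZoneId.of("Asia/Shanghai")).toInstant();

        System.out.println(correct);         // 2024-04-29T23:00:00Z
        System.out.println(doubleConverted); // 2024-04-29T15:00:00Z
        // The filter literal ends up 8 hours early, so pushed-down
        // predicates no longer match the stored rows.
        System.out.println(Duration.between(doubleConverted, correct).toHours()); // 8
    }
}
```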





[jira] [Commented] (FLINK-20539) Type mismatch when using ROW in computed column

2024-05-21 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848069#comment-17848069
 ] 

Jane Chan commented on FLINK-20539:
---

I'm sorry for the late reply because I was under the weather for a while. Many 
thanks to [~martijnvisser] and [~Sergey Nuyanzin] for discovering and 
addressing this lingering matter. 

> Type mismatch when using ROW in computed column
> ---
>
> Key: FLINK-20539
> URL: https://issues.apache.org/jira/browse/FLINK-20539
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Timo Walther
>Assignee: xuyang
>Priority: Major
>  Labels: auto-unassigned, pull-request-available
> Fix For: 1.19.0, 1.18.2
>
>
> The following SQL:
> {code}
> env.executeSql(
>   "CREATE TABLE Orders (\n"
>   + "order_number BIGINT,\n"
>   + "price        INT,\n"
>   + "first_name   STRING,\n"
>   + "last_name    STRING,\n"
>   + "buyer_name AS ROW(first_name, last_name)\n"
>   + ") WITH (\n"
>   + "  'connector' = 'datagen'\n"
>   + ")");
> env.executeSql("SELECT * FROM Orders").print();
> {code}
> Fails with:
> {code}
> Exception in thread "main" java.lang.AssertionError: Conversion to relational 
> algebra failed to preserve datatypes:
> validated type:
> RecordType(BIGINT order_number, INTEGER price, VARCHAR(2147483647) CHARACTER 
> SET "UTF-16LE" first_name, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" 
> last_name, RecordType:peek_no_expand(VARCHAR(2147483647) CHARACTER SET 
> "UTF-16LE" EXPR$0, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" EXPR$1) NOT 
> NULL buyer_name) NOT NULL
> converted type:
> RecordType(BIGINT order_number, INTEGER price, VARCHAR(2147483647) CHARACTER 
> SET "UTF-16LE" first_name, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" 
> last_name, RecordType(VARCHAR(2147483647) CHARACTER SET "UTF-16LE" EXPR$0, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" EXPR$1) NOT NULL buyer_name) NOT 
> NULL
> rel:
> LogicalProject(order_number=[$0], price=[$1], first_name=[$2], 
> last_name=[$3], buyer_name=[ROW($2, $3)])
>   LogicalTableScan(table=[[default_catalog, default_database, Orders]])
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.checkConvertedType(SqlToRelConverter.java:467)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:582)
> {code}
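A workaround commonly suggested for this class of type-mismatch issue is to pin the computed column's type with an explicit CAST, so that the validated and converted row types agree. This is a sketch only; the field names `f0`/`f1` are illustrative, not taken from the issue:

```sql
CREATE TABLE Orders (
  order_number BIGINT,
  price        INT,
  first_name   STRING,
  last_name    STRING,
  -- Casting to a named ROW type keeps the validated and converted types identical
  buyer_name AS CAST(ROW(first_name, last_name) AS ROW<f0 STRING, f1 STRING>)
) WITH (
  'connector' = 'datagen'
);
```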





[jira] [Assigned] (FLINK-35318) incorrect timezone handling for TIMESTAMP_WITH_LOCAL_TIME_ZONE type during predicate pushdown

2024-05-10 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan reassigned FLINK-35318:
-

Assignee: linshangquan  (was: Jane Chan)

> incorrect timezone handling for TIMESTAMP_WITH_LOCAL_TIME_ZONE type during 
> predicate pushdown
> -
>
> Key: FLINK-35318
> URL: https://issues.apache.org/jira/browse/FLINK-35318
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.18.1
> Environment: flink version 1.18.1
> iceberg version 1.15.1
>Reporter: linshangquan
>Assignee: linshangquan
>Priority: Major
> Attachments: image-2024-05-09-14-06-58-007.png, 
> image-2024-05-09-14-09-38-453.png, image-2024-05-09-14-11-38-476.png, 
> image-2024-05-09-14-22-14-417.png, image-2024-05-09-14-22-59-370.png, 
> image-2024-05-09-18-52-03-741.png, image-2024-05-09-18-52-28-584.png
>
>





[jira] [Assigned] (FLINK-35318) incorrect timezone handling for TIMESTAMP_WITH_LOCAL_TIME_ZONE type during predicate pushdown

2024-05-10 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan reassigned FLINK-35318:
-

Assignee: Jane Chan

> incorrect timezone handling for TIMESTAMP_WITH_LOCAL_TIME_ZONE type during 
> predicate pushdown
> -
>
> Key: FLINK-35318
> URL: https://issues.apache.org/jira/browse/FLINK-35318
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.18.1
> Environment: flink version 1.18.1
> iceberg version 1.15.1
>Reporter: linshangquan
>Assignee: Jane Chan
>Priority: Major
> Attachments: image-2024-05-09-14-06-58-007.png, 
> image-2024-05-09-14-09-38-453.png, image-2024-05-09-14-11-38-476.png, 
> image-2024-05-09-14-22-14-417.png, image-2024-05-09-14-22-59-370.png, 
> image-2024-05-09-18-52-03-741.png, image-2024-05-09-18-52-28-584.png
>
>





[jira] [Commented] (FLINK-35318) incorrect timezone handling for TIMESTAMP_WITH_LOCAL_TIME_ZONE type during predicate pushdown

2024-05-10 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845408#comment-17845408
 ] 

Jane Chan commented on FLINK-35318:
---

Hi [~linshangquan], thanks for reporting this issue. Your understanding is 
correct: RexNodeToExpressionConverter#visitLiteral should not convert the literal 
to UTC again, since this conversion has already been done during the SQL-to-rel 
phase.

> incorrect timezone handling for TIMESTAMP_WITH_LOCAL_TIME_ZONE type during 
> predicate pushdown
> -
>
> Key: FLINK-35318
> URL: https://issues.apache.org/jira/browse/FLINK-35318
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.18.1
> Environment: flink version 1.18.1
> iceberg version 1.15.1
>Reporter: linshangquan
>Priority: Major
> Attachments: image-2024-05-09-14-06-58-007.png, 
> image-2024-05-09-14-09-38-453.png, image-2024-05-09-14-11-38-476.png, 
> image-2024-05-09-14-22-14-417.png, image-2024-05-09-14-22-59-370.png, 
> image-2024-05-09-18-52-03-741.png, image-2024-05-09-18-52-28-584.png
>
>





[jira] [Resolved] (FLINK-34916) Support `ALTER CATALOG SET` syntax

2024-05-08 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan resolved FLINK-34916.
---
Fix Version/s: 1.20.0
   Resolution: Fixed

Fixed in master 4611817591c38019c27ffad6d8cdc68292f079a4

> Support `ALTER CATALOG SET` syntax
> --
>
> Key: FLINK-34916
> URL: https://issues.apache.org/jira/browse/FLINK-34916
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
> Attachments: image-2024-03-22-18-30-33-182.png
>
>
> Set one or more properties in the specified catalog. If a particular property 
> is already set in the catalog, override the old value with the new one.
> !image-2024-03-22-18-30-33-182.png|width=736,height=583!
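For reference, the syntax introduced by this ticket takes the form below; the catalog and property names are illustrative:

```sql
-- Override or add one or more properties on an existing catalog
ALTER CATALOG my_catalog SET (
  'default-database' = 'new_db'
);
```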





[jira] [Closed] (FLINK-34916) Support `ALTER CATALOG SET` syntax

2024-05-08 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan closed FLINK-34916.
-

> Support `ALTER CATALOG SET` syntax
> --
>
> Key: FLINK-34916
> URL: https://issues.apache.org/jira/browse/FLINK-34916
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
> Attachments: image-2024-03-22-18-30-33-182.png
>
>





[jira] [Resolved] (FLINK-34915) Complete `DESCRIBE CATALOG` syntax

2024-04-28 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan resolved FLINK-34915.
---
Fix Version/s: 1.20.0
   Resolution: Fixed

> Complete `DESCRIBE CATALOG` syntax
> --
>
> Key: FLINK-34915
> URL: https://issues.apache.org/jira/browse/FLINK-34915
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
> Attachments: image-2024-03-22-18-29-00-454.png, 
> image-2024-04-07-17-54-51-203.png
>
>
> Describe the metadata of an existing catalog. The metadata information 
> includes the catalog’s name, type, and comment. If the optional {{EXTENDED}} 
> option is specified, catalog properties are also returned.
> NOTICE: The parser part of this syntax was implemented in FLIP-69, but it is 
> not actually available; we can complete the syntax in this FLIP. 
> !image-2024-04-07-17-54-51-203.png|width=545,height=332!
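The completed syntax can be exercised as follows (a sketch; the catalog name is illustrative):

```sql
-- Basic form: returns the catalog's name, type, and comment
DESCRIBE CATALOG my_catalog;

-- EXTENDED additionally returns the catalog properties
DESCRIBE CATALOG EXTENDED my_catalog;
```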





[jira] [Closed] (FLINK-34915) Complete `DESCRIBE CATALOG` syntax

2024-04-28 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan closed FLINK-34915.
-

> Complete `DESCRIBE CATALOG` syntax
> --
>
> Key: FLINK-34915
> URL: https://issues.apache.org/jira/browse/FLINK-34915
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
> Attachments: image-2024-03-22-18-29-00-454.png, 
> image-2024-04-07-17-54-51-203.png
>
>





[jira] [Commented] (FLINK-34915) Complete `DESCRIBE CATALOG` syntax

2024-04-28 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17841635#comment-17841635
 ] 

Jane Chan commented on FLINK-34915:
---

Fixed in master e412402ca4dfc438e28fb990dc53ea7809430aee

> Complete `DESCRIBE CATALOG` syntax
> --
>
> Key: FLINK-34915
> URL: https://issues.apache.org/jira/browse/FLINK-34915
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-03-22-18-29-00-454.png, 
> image-2024-04-07-17-54-51-203.png
>
>





[jira] [Resolved] (FLINK-34633) Support unnesting array constants

2024-04-17 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan resolved FLINK-34633.
---
Resolution: Fixed

> Support unnesting array constants
> -
>
> Key: FLINK-34633
> URL: https://issues.apache.org/jira/browse/FLINK-34633
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Affects Versions: 1.18.1
>Reporter: Xingcan Cui
>Assignee: Jeyhun Karimov
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.19.1
>
>
> It seems that the current planner doesn't support using UNNEST on array 
> constants.(x)
> {code:java}
> SELECT * FROM UNNEST(ARRAY[1,2,3]);{code}
>  
> The following query can't be compiled.(x)
> {code:java}
> SELECT * FROM (VALUES('a')) CROSS JOIN UNNEST(ARRAY[1, 2, 3]){code}
>  
> The rewritten version works. (/)
> {code:java}
> SELECT * FROM (SELECT *, ARRAY[1,2,3] AS A FROM (VALUES('a'))) CROSS JOIN 
> UNNEST(A){code}





[jira] [Closed] (FLINK-34633) Support unnesting array constants

2024-04-17 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan closed FLINK-34633.
-

> Support unnesting array constants
> -
>
> Key: FLINK-34633
> URL: https://issues.apache.org/jira/browse/FLINK-34633
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Affects Versions: 1.18.1
>Reporter: Xingcan Cui
>Assignee: Jeyhun Karimov
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.19.1
>
>





[jira] [Commented] (FLINK-34633) Support unnesting array constants

2024-04-17 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17838429#comment-17838429
 ] 

Jane Chan commented on FLINK-34633:
---

Fixed in master 43a3d50ce3982b9abf04b81407fed46c5c25f819

> Support unnesting array constants
> -
>
> Key: FLINK-34633
> URL: https://issues.apache.org/jira/browse/FLINK-34633
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Affects Versions: 1.18.1
>Reporter: Xingcan Cui
>Assignee: Jeyhun Karimov
>Priority: Minor
>  Labels: pull-request-available
>





[jira] [Updated] (FLINK-34633) Support unnesting array constants

2024-04-17 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-34633:
--
Fix Version/s: 1.19.1

> Support unnesting array constants
> -
>
> Key: FLINK-34633
> URL: https://issues.apache.org/jira/browse/FLINK-34633
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Affects Versions: 1.18.1
>Reporter: Xingcan Cui
>Assignee: Jeyhun Karimov
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.19.1
>
>





[jira] [Commented] (FLINK-24939) Support 'SHOW CREATE CATALOG' syntax

2024-04-06 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834588#comment-17834588
 ] 

Jane Chan commented on FLINK-24939:
---

Fixed in master 2747a5814bcc5cd45f15c023beba9b0644fe1ead

> Support 'SHOW CREATE CATALOG' syntax
> 
>
> Key: FLINK-24939
> URL: https://issues.apache.org/jira/browse/FLINK-24939
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.14.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
>
> SHOW CREATE CATALOG ;
>  
> `Catalog` plays an increasingly important role in Flink; it would be great to 
> be able to get detailed information about an existing catalog
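The statement added by this ticket can be used as follows (a sketch; the catalog name is illustrative):

```sql
-- Prints the DDL that would recreate the catalog, including its properties
SHOW CREATE CATALOG my_catalog;
```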





[jira] [Closed] (FLINK-24939) Support 'SHOW CREATE CATALOG' syntax

2024-04-06 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan closed FLINK-24939.
-

> Support 'SHOW CREATE CATALOG' syntax
> 
>
> Key: FLINK-24939
> URL: https://issues.apache.org/jira/browse/FLINK-24939
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.14.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>





[jira] [Resolved] (FLINK-24939) Support 'SHOW CREATE CATALOG' syntax

2024-04-06 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan resolved FLINK-24939.
---
Resolution: Fixed

> Support 'SHOW CREATE CATALOG' syntax
> 
>
> Key: FLINK-24939
> URL: https://issues.apache.org/jira/browse/FLINK-24939
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.14.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>





[jira] [Updated] (FLINK-24939) Support 'SHOW CREATE CATALOG' syntax

2024-04-06 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-24939:
--
Fix Version/s: 1.20.0

> Support 'SHOW CREATE CATALOG' syntax
> 
>
> Key: FLINK-24939
> URL: https://issues.apache.org/jira/browse/FLINK-24939
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.14.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>





[jira] [Assigned] (FLINK-34917) Support enhanced `CREATE CATALOG` syntax

2024-03-24 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan reassigned FLINK-34917:
-

Assignee: Yubin Li

> Support enhanced `CREATE CATALOG` syntax
> 
>
> Key: FLINK-34917
> URL: https://issues.apache.org/jira/browse/FLINK-34917
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
> Attachments: image-2024-03-22-18-31-59-632.png
>
>
> {{IF NOT EXISTS}}  clause: If the catalog already exists, nothing happens.
> {{COMMENT}} clause: An optional string literal. The description for the 
> catalog.
> NOTICE: We just need to add the '[IF NOT EXISTS]' and '[COMMENT]' 
> clauses to the 'CREATE CATALOG' statement.
> !image-2024-03-22-18-31-59-632.png|width=795,height=87!





[jira] [Assigned] (FLINK-34918) Introduce the support of Catalog for comments

2024-03-24 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan reassigned FLINK-34918:
-

Assignee: Yubin Li

> Introduce the support of Catalog for comments
> -
>
> Key: FLINK-34918
> URL: https://issues.apache.org/jira/browse/FLINK-34918
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>
> We propose to introduce `getComment()` method in `CatalogDescriptor`, and the 
> reasons are as follows.
> 1. For the sake of design consistency, this follows the design of FLIP-295 [1], 
> which introduced the `CatalogStore` component. `CatalogDescriptor` includes names 
> and attributes, both of which are used to describe the catalog, and `comment` 
> can be added smoothly.
> 2. Extending the existing class is preferable to adding a new method to an 
> existing interface. In particular, the `Catalog` interface, as a core interface, 
> is used by a series of important components such as `CatalogFactory`, 
> `CatalogManager`, and `FactoryUtil`, and is implemented by a large number of 
> connectors such as JDBC, Paimon, and Hive. Adding methods to it would greatly 
> increase the implementation complexity and, more importantly, the 
> cost of iteration, maintenance, and verification.





[jira] [Assigned] (FLINK-34916) Support `ALTER CATALOG` syntax

2024-03-24 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan reassigned FLINK-34916:
-

Assignee: Yubin Li

> Support `ALTER CATALOG` syntax
> --
>
> Key: FLINK-34916
> URL: https://issues.apache.org/jira/browse/FLINK-34916
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
> Attachments: image-2024-03-22-18-30-33-182.png
>
>





[jira] [Assigned] (FLINK-34915) Support `DESCRIBE CATALOG` syntax

2024-03-24 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan reassigned FLINK-34915:
-

Assignee: Yubin Li

> Support `DESCRIBE CATALOG` syntax
> -
>
> Key: FLINK-34915
> URL: https://issues.apache.org/jira/browse/FLINK-34915
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
> Attachments: image-2024-03-22-18-29-00-454.png
>
>
> Describe the metadata of an existing catalog. The metadata information 
> includes the catalog’s name, type, and comment. If the optional {{EXTENDED}} 
> option is specified, catalog properties are also returned.
> NOTICE: The parser part of this syntax was implemented in FLIP-69, but it is 
> not actually available; we can complete the syntax in this FLIP. 
> !image-2024-03-22-18-29-00-454.png|width=561,height=374!





[jira] [Closed] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-03-19 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan closed FLINK-29114.
-

> TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with 
> result mismatch 
> --
>
> Key: FLINK-29114
> URL: https://issues.apache.org/jira/browse/FLINK-29114
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.15.0, 1.19.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Jane Chan
>Priority: Blocker
>  Labels: auto-deprioritized-major, pull-request-available, 
> test-stability
> Fix For: 1.17.3, 1.18.2, 1.20.0, 1.19.1
>
> Attachments: FLINK-29114.log, image-2024-02-27-15-23-49-494.png, 
> image-2024-02-27-15-26-07-657.png, image-2024-02-27-15-32-48-317.png
>
>
> It can be reproduced locally by repeating the test; usually about 100 
> iterations are enough to produce several failures.
> {noformat}
> [ERROR] Tests run: 13, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 1.664 s <<< FAILURE! - in 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase
> [ERROR] 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableHintWithLogicalTableScanReuse
>   Time elapsed: 0.108 s  <<< FAILURE!
> java.lang.AssertionError: expected: 3,2,Hello world, 3,2,Hello world, 3,2,Hello world)> but was: 2,2,Hello, 2,2,Hello, 3,2,Hello world, 3,2,Hello world)>
>     at org.junit.Assert.fail(Assert.java:89)
>     at org.junit.Assert.failNotEquals(Assert.java:835)
>     at org.junit.Assert.assertEquals(Assert.java:120)
>     at org.junit.Assert.assertEquals(Assert.java:146)
>     at 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableHintWithLogicalTableScanReuse(TableSourceITCase.scala:428)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>     at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>     at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>     at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>     at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>     at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>     at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
>     at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
>     at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
> 

[jira] [Updated] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-03-19 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-29114:
--
Affects Version/s: 1.18.0
   1.17.0

> TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with 
> result mismatch 
> --
>
> Key: FLINK-29114
> URL: https://issues.apache.org/jira/browse/FLINK-29114
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.15.0, 1.17.0, 1.18.0, 1.19.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Jane Chan
>Priority: Blocker
>  Labels: auto-deprioritized-major, pull-request-available, 
> test-stability
> Fix For: 1.17.3, 1.18.2, 1.20.0, 1.19.1
>
> Attachments: FLINK-29114.log, image-2024-02-27-15-23-49-494.png, 
> image-2024-02-27-15-26-07-657.png, image-2024-02-27-15-32-48-317.png
>
>

[jira] [Updated] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-03-19 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-29114:
--
Fix Version/s: 1.17.3
   1.18.2

> TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with 
> result mismatch 
> --
>
> Key: FLINK-29114
> URL: https://issues.apache.org/jira/browse/FLINK-29114
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.15.0, 1.19.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Jane Chan
>Priority: Blocker
>  Labels: auto-deprioritized-major, pull-request-available, 
> test-stability
> Fix For: 1.17.3, 1.18.2, 1.20.0, 1.19.1
>
> Attachments: FLINK-29114.log, image-2024-02-27-15-23-49-494.png, 
> image-2024-02-27-15-26-07-657.png, image-2024-02-27-15-32-48-317.png
>
>

[jira] [Resolved] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-03-19 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan resolved FLINK-29114.
---
Resolution: Fixed

> TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with 
> result mismatch 
> --
>
> Key: FLINK-29114
> URL: https://issues.apache.org/jira/browse/FLINK-29114
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.15.0, 1.19.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Jane Chan
>Priority: Blocker
>  Labels: auto-deprioritized-major, pull-request-available, 
> test-stability
> Fix For: 1.17.3, 1.18.2, 1.20.0, 1.19.1
>
> Attachments: FLINK-29114.log, image-2024-02-27-15-23-49-494.png, 
> image-2024-02-27-15-26-07-657.png, image-2024-02-27-15-32-48-317.png
>
>

[jira] [Commented] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-03-19 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828216#comment-17828216
 ] 

Jane Chan commented on FLINK-29114:
---

Fixed in release-1.17 0b430c2e614a7a9936e5eedf49e87393d8bc7a77

Fixed in release-1.18 c2a85ac15003f03682979c617424c74875e19137

> TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with 
> result mismatch 
> --
>
> Key: FLINK-29114
> URL: https://issues.apache.org/jira/browse/FLINK-29114
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.15.0, 1.19.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Jane Chan
>Priority: Blocker
>  Labels: auto-deprioritized-major, pull-request-available, 
> test-stability
> Fix For: 1.20.0, 1.19.1
>
> Attachments: FLINK-29114.log, image-2024-02-27-15-23-49-494.png, 
> image-2024-02-27-15-26-07-657.png, image-2024-02-27-15-32-48-317.png
>
>

[jira] [Updated] (FLINK-34669) Optimization of Arch Rules for Connector Constraints and Violation File Updates

2024-03-14 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-34669:
--
Description: 
Description:
There are potential optimizations within our Arch rules that could improve the 
development workflow. This originated from the discussion on PR 
[https://github.com/apache/flink/pull/24492]

1. Connector Constraints:
The current Arch rule, `CONNECTOR_CLASSES_ONLY_DEPEND_ON_PUBLIC_API`, was 
implemented to prevent internal code changes in Flink from affecting the 
compilation of connectors in external repositories. This rule is crucial for 
connectors that are external, but it may be unnecessarily restrictive for the 
filesystem connector, which remains within the same code repository as Flink. 
Maybe we should consider excluding the filesystem connector from this rule to 
better reflect its status as an internal component.

2. Preconditions Class Promotion:
The majority of Arch rule violations for connectors are related to the use of 
`Preconditions#checkX`. This consistent pattern of violations prompts the 
question of whether we should reclassify `Preconditions` from its current 
internal status to a `Public` or `PublicEvolving` interface, allowing broader 
and more official usage within our codebase.

3. Violation File Updates:
Updating the violation file following the `freeze.refreeze=true` process 
outlined in the readme proves to be difficult. The diffs generated include the 
line numbers, which complicates the review process, especially when substantial 
changes are submitted. Reviewers face a considerable challenge in 
distinguishing between meaningful changes and mere line number alterations. To 
alleviate this issue, I suggest that we modify the process so that line numbers 
are not included in the violation file diffs, streamlining reviews and commits.

  was:
Description:
Potential optimization within our Arch rules that could improve the development 
workflow. This originated from the [discussion in PR 
#24492|https://github.com/apache/flink/pull/24492#pullrequestreview-1936277808]

1. Connector Constraints:
Our current Arch rule, `CONNECTOR_CLASSES_ONLY_DEPEND_ON_PUBLIC_API`, was 
implemented to prevent internal code changes in Flink from affecting the 
compilation of connectors in external repositories. This rule is crucial for 
connectors that are external, but it may be unnecessarily restrictive for the 
filesystem connector, which remains within the same code repository as Flink. 
Maybe we should consider excluding the filesystem connector from this rule to 
better reflect its status as an internal component.

2. Preconditions Class Promotion:
The majority of Arch rule violations for connectors are related to the use of 
`Preconditions#checkX`. This consistent pattern of violations prompts the 
question of whether we should reclassify `Preconditions` from its current 
internal status to a `Public` or `PublicEvolving` interface, allowing broader 
and more official usage within our codebase.

3. Violation File Updates:
Updating the violation file following the `freeze.refreeze=true` process 
outlined in the readme proves to be difficult. The diffs generated include the 
line numbers, which complicates the review process, especially when substantial 
changes are submitted. Reviewers face a considerable challenge in 
distinguishing between meaningful changes and mere line number alterations. To 
alleviate this issue, I suggest that we modify the process so that line numbers 
are not included in the violation file diffs, streamlining reviews and commits.
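For point 3, one way to keep line numbers out of violation-file diffs is to normalize entries before they are written; a minimal Python sketch (the entry format shown is hypothetical, not ArchUnit's actual one):

```python
import re

def normalize_violation(entry: str) -> str:
    """Drop trailing '(File.java:123)' line numbers so that unrelated
    code movement does not show up as churn in violation-file diffs."""
    # e.g. "... in (Foo.java:42)" becomes "... in (Foo.java)"
    return re.sub(r"\((\w+\.java):\d+\)", r"(\1)", entry)

print(normalize_violation(
    "Method <a.b.Foo.bar()> calls method "
    "<c.d.Preconditions.checkNotNull(Object)> in (Foo.java:42)"
))
```

With this normalization applied, re-freezing after a refactor only diffs entries whose rule violations actually changed.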


> Optimization of Arch Rules for Connector Constraints and Violation File 
> Updates
> ---
>
> Key: FLINK-34669
> URL: https://issues.apache.org/jira/browse/FLINK-34669
> Project: Flink
>  Issue Type: Improvement
>  Components: Test Infrastructure
>Affects Versions: 1.20.0
>Reporter: Jane Chan
>Priority: Major
>

[jira] [Updated] (FLINK-34669) Optimization of Arch Rules for Connector Constraints and Violation File Updates

2024-03-14 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-34669:
--
Description: 
Description:
There are potential optimizations within our Arch rules that could improve the 
development workflow. This originated from the discussion on PR 
[https://github.com/apache/flink/pull/24492]

1. Connector Constraints:
Our current Arch rule, `CONNECTOR_CLASSES_ONLY_DEPEND_ON_PUBLIC_API`, was 
implemented to prevent internal code changes in Flink from affecting the 
compilation of connectors in external repositories. This rule is crucial for 
connectors that are external, but it may be unnecessarily restrictive for the 
filesystem connector, which remains within the same code repository as Flink. 
Maybe we should consider excluding the filesystem connector from this rule to 
better reflect its status as an internal component.

2. Preconditions Class Promotion:
The majority of Arch rule violations for connectors are related to the use of 
`Preconditions#checkX`. This consistent pattern of violations prompts the 
question of whether we should reclassify `Preconditions` from its current 
internal status to a `Public` or `PublicEvolving` interface, allowing broader 
and more official usage within our codebase.

3. Violation File Updates:
Updating the violation file following the `freeze.refreeze=true` process 
outlined in the readme proves to be difficult. The diffs generated include the 
line numbers, which complicates the review process, especially when substantial 
changes are submitted. Reviewers face a considerable challenge in 
distinguishing between meaningful changes and mere line number alterations. To 
alleviate this issue, I suggest that we modify the process so that line numbers 
are not included in the violation file diffs, streamlining reviews and commits.

  was:
Description:
I have identified potential areas for optimization within our Arch rules that 
could improve our development workflow. This originated from the discussion for 
PR https://github.com/apache/flink/pull/24492

1. Connector Constraints:
Our current Arch rule, `CONNECTOR_CLASSES_ONLY_DEPEND_ON_PUBLIC_API`, was 
implemented to prevent internal code changes in Flink from affecting the 
compilation of connectors in external repositories. This rule is crucial for 
connectors that are external, but it may be unnecessarily restrictive for the 
filesystem connector, which remains within the same code repository as Flink. 
Maybe we should consider excluding the filesystem connector from this rule to 
better reflect its status as an internal component.

2. Preconditions Class Promotion:
The majority of Arch rule violations for connectors are related to the use of 
`Preconditions#checkX`. This consistent pattern of violations prompts the 
question of whether we should reclassify `Preconditions` from its current 
internal status to a `Public` or `PublicEvolving` interface, allowing broader 
and more official usage within our codebase.

3. Violation File Updates:
Updating the violation file following the `freeze.refreeze=true` process 
outlined in the readme proves to be difficult. The diffs generated include the 
line numbers, which complicates the review process, especially when substantial 
changes are submitted. Reviewers face a considerable challenge in 
distinguishing between meaningful changes and mere line number alterations. To 
alleviate this issue, I suggest that we modify the process so that line numbers 
are not included in the violation file diffs, streamlining reviews and commits.


> Optimization of Arch Rules for Connector Constraints and Violation File 
> Updates
> ---
>
> Key: FLINK-34669
> URL: https://issues.apache.org/jira/browse/FLINK-34669
> Project: Flink
>  Issue Type: Improvement
>  Components: Test Infrastructure
>Affects Versions: 1.20.0
>Reporter: Jane Chan
>Priority: Major
>

[jira] [Commented] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-03-14 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17827116#comment-17827116
 ] 

Jane Chan commented on FLINK-29114:
---

Sorry for the inconvenience, and thanks for correcting the version. cc 
[~leonard], [~pnowojski] and [~mapohl]

> TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with 
> result mismatch 
> --
>
> Key: FLINK-29114
> URL: https://issues.apache.org/jira/browse/FLINK-29114
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.15.0, 1.19.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Jane Chan
>Priority: Blocker
>  Labels: auto-deprioritized-major, pull-request-available, 
> test-stability
> Fix For: 1.20.0, 1.19.1
>
> Attachments: FLINK-29114.log, image-2024-02-27-15-23-49-494.png, 
> image-2024-02-27-15-26-07-657.png, image-2024-02-27-15-32-48-317.png
>
>
> It could be reproduced locally by repeating tests. Usually about 100 
> iterations are enough to have several failed tests
> {noformat}
> [ERROR] Tests run: 13, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 1.664 s <<< FAILURE! - in 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase
> [ERROR] 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableHintWithLogicalTableScanReuse
>   Time elapsed: 0.108 s  <<< FAILURE!
> java.lang.AssertionError: expected: 3,2,Hello world, 3,2,Hello world, 3,2,Hello world)> but was: 2,2,Hello, 2,2,Hello, 3,2,Hello world, 3,2,Hello world)>
>     at org.junit.Assert.fail(Assert.java:89)
>     at org.junit.Assert.failNotEquals(Assert.java:835)
>     at org.junit.Assert.assertEquals(Assert.java:120)
>     at org.junit.Assert.assertEquals(Assert.java:146)
>     at 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableHintWithLogicalTableScanReuse(TableSourceITCase.scala:428)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>     at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>     at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>     at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>     at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>     at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>     at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
>     at 
> 
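
The reproduction approach described in the report above (re-running the test until mismatches appear) can be sketched as a plain loop. `flakyTest` below is a hypothetical stand-in that fails deterministically, purely to illustrate counting failures across repeated runs:

```java
// Sketch of the "repeat ~100 iterations until several runs fail" approach from the report.
class RepeatUntilFailure {
    private static int counter = 0;

    // Hypothetical stand-in for the flaky test; the real one fails nondeterministically.
    private static void flakyTest() {
        counter++;
        if (counter % 37 == 0) {
            throw new AssertionError("result mismatch at iteration " + counter);
        }
    }

    // Runs the test repeatedly and counts how many iterations failed.
    static int run(int iterations) {
        counter = 0;
        int failures = 0;
        for (int i = 0; i < iterations; i++) {
            try {
                flakyTest();
            } catch (AssertionError e) {
                failures++;
            }
        }
        return failures;
    }

    public static void main(String[] args) {
        System.out.println("failures in 100 runs: " + RepeatUntilFailure.run(100));
    }
}
```

In practice, tools such as JUnit 5's repeated-test support or simply re-running the surefire suite serve the same purpose; the point is only that a few dozen repetitions surface the nondeterministic plan-reuse mismatch.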

[jira] [Updated] (FLINK-34669) Optimization of Arch Rules for Connector Constraints and Violation File Updates

2024-03-14 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-34669:
--
Affects Version/s: 1.20.0

> Optimization of Arch Rules for Connector Constraints and Violation File 
> Updates
> ---
>
> Key: FLINK-34669
> URL: https://issues.apache.org/jira/browse/FLINK-34669
> Project: Flink
>  Issue Type: Improvement
>  Components: Test Infrastructure
>Affects Versions: 1.20.0
>Reporter: Jane Chan
>Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34669) Optimization of Arch Rules for Connector Constraints and Violation File Updates

2024-03-14 Thread Jane Chan (Jira)
Jane Chan created FLINK-34669:
-

 Summary: Optimization of Arch Rules for Connector Constraints and 
Violation File Updates
 Key: FLINK-34669
 URL: https://issues.apache.org/jira/browse/FLINK-34669
 Project: Flink
  Issue Type: Improvement
  Components: Test Infrastructure
Reporter: Jane Chan







[jira] [Commented] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-03-14 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17827065#comment-17827065
 ] 

Jane Chan commented on FLINK-29114:
---

Fixed in release-1.19 4d5327d40a3ca3e6845f779ca6508733fd630bae

> TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with 
> result mismatch 
> --
>
> Key: FLINK-29114
> URL: https://issues.apache.org/jira/browse/FLINK-29114
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.15.0, 1.19.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Jane Chan
>Priority: Blocker
>  Labels: auto-deprioritized-major, pull-request-available, 
> test-stability
> Fix For: 1.19.0, 1.20.0
>
> Attachments: FLINK-29114.log, image-2024-02-27-15-23-49-494.png, 
> image-2024-02-27-15-26-07-657.png, image-2024-02-27-15-32-48-317.png
>
>
> It could be reproduced locally by repeating tests. Usually about 100 
> iterations are enough to have several failed tests

[jira] [Updated] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-03-14 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-29114:
--
Fix Version/s: 1.19.0

> TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with 
> result mismatch 
> --
>
> Key: FLINK-29114
> URL: https://issues.apache.org/jira/browse/FLINK-29114
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.15.0, 1.19.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Jane Chan
>Priority: Blocker
>  Labels: auto-deprioritized-major, pull-request-available, 
> test-stability
> Fix For: 1.19.0, 1.20.0
>
> Attachments: FLINK-29114.log, image-2024-02-27-15-23-49-494.png, 
> image-2024-02-27-15-26-07-657.png, image-2024-02-27-15-32-48-317.png
>
>
> It could be reproduced locally by repeating tests. Usually about 100 
> iterations are enough to have several failed tests

[jira] [Commented] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-03-13 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17826925#comment-17826925
 ] 

Jane Chan commented on FLINK-29114:
---

Thanks [~mapohl], I've opened a cherry-pick 
[PR|https://github.com/apache/flink/pull/24492], and it would be great if you 
could help review it.

> TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with 
> result mismatch 
> --
>
> Key: FLINK-29114
> URL: https://issues.apache.org/jira/browse/FLINK-29114
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.15.0, 1.19.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Jane Chan
>Priority: Blocker
>  Labels: auto-deprioritized-major, pull-request-available, 
> test-stability
> Fix For: 1.20.0
>
> Attachments: FLINK-29114.log, image-2024-02-27-15-23-49-494.png, 
> image-2024-02-27-15-26-07-657.png, image-2024-02-27-15-32-48-317.png
>
>
> It could be reproduced locally by repeating tests. Usually about 100 
> iterations are enough to have several failed tests

[jira] [Commented] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-03-13 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825997#comment-17825997
 ] 

Jane Chan commented on FLINK-29114:
---

I'm not sure whether it's good timing to cherry-pick this into release-1.19, 
considering that the release is already being prepared.

> TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with 
> result mismatch 
> --
>
> Key: FLINK-29114
> URL: https://issues.apache.org/jira/browse/FLINK-29114
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.15.0, 1.19.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Jane Chan
>Priority: Major
>  Labels: auto-deprioritized-major, pull-request-available, 
> test-stability
> Fix For: 1.20.0
>
> Attachments: FLINK-29114.log, image-2024-02-27-15-23-49-494.png, 
> image-2024-02-27-15-26-07-657.png, image-2024-02-27-15-32-48-317.png
>
>
> It could be reproduced locally by repeating tests. Usually about 100 
> iterations are enough to have several failed tests

[jira] [Updated] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-03-13 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-29114:
--
Fix Version/s: 1.20.0

> TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with 
> result mismatch 
> --
>
> Key: FLINK-29114
> URL: https://issues.apache.org/jira/browse/FLINK-29114
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.15.0, 1.19.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Jane Chan
>Priority: Major
>  Labels: auto-deprioritized-major, pull-request-available, 
> test-stability
> Fix For: 1.20.0
>
> Attachments: FLINK-29114.log, image-2024-02-27-15-23-49-494.png, 
> image-2024-02-27-15-26-07-657.png, image-2024-02-27-15-32-48-317.png
>
>
> It could be reproduced locally by repeating tests. Usually about 100 
> iterations are enough to have several failed tests
>     at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>     at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>     at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>     at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>     at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
>     at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
>     at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)

[jira] [Commented] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-03-13 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825993#comment-17825993
 ] 

Jane Chan commented on FLINK-29114:
---

Fixed in master 7d0111dfab640f2f590dd710d76de927c86cf83e


[jira] [Commented] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-02-28 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821941#comment-17821941
 ] 

Jane Chan commented on FLINK-29114:
---

[~hackergin] Using different sink paths could avoid the unstable case, but the 
real problem is how the staging dir path is generated: relying solely on the 
timestamp as a path postfix is unreliable.


[jira] [Commented] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-02-27 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821114#comment-17821114
 ] 

Jane Chan commented on FLINK-29114:
---

h3. A short summary

Before a sink commits, the files it writes are stored temporarily under a 
staging directory with the following structure:
{code:java}
target_dir/
├─ .staging_timestamp/
│  ├─ task-${subtaskId}-attempt-${attemptNumber}/{code}

When the statement set syntax (or another multi-sink write path) targets a 
single output path, several sink streams can inadvertently share the same 
staging directory. This happens in the rare case where 
System.currentTimeMillis() returns identical values for the different sinks.

In the commit phase, the staging directory is deleted once the commit 
finishes. A concurrently committing sink task may therefore fail to locate its 
own staging directory, which prevents it from committing correctly and can 
produce wrong results.
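
The collision described above can be sketched in plain Java. The stagingDir 
helper and path layout below are illustrative assumptions, not Flink's actual 
code; the point is only that a name derived from System.currentTimeMillis() 
alone is identical for two sinks initialized within the same millisecond.

```java
public class StagingPathCollision {
    // Illustrative pre-fix scheme (assumption): staging dir named by timestamp only.
    static String stagingDir(String targetDir, long timestamp) {
        return targetDir + "/.staging_" + timestamp;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        // Two sinks writing to the same target path, created in the same millisecond:
        String sinkA = stagingDir("/tmp/target", now);
        String sinkB = stagingDir("/tmp/target", now);
        // Identical names mean a shared staging dir, so the first commit's
        // cleanup deletes the second sink's staged files.
        System.out.println(sinkA.equals(sinkB)); // prints "true"
    }
}
```

The scheme is deterministic in the timestamp, which is exactly why two sinks 
sharing a target path and a creation instant end up in the same directory.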


[jira] [Commented] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-02-26 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821024#comment-17821024
 ] 

Jane Chan commented on FLINK-29114:
---

Only this particular case hits the issue because it writes to the same 
non-partitioned sink table via a statement set, which users would rarely do in 
practice.


[jira] [Commented] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-02-26 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821012#comment-17821012
 ] 

Jane Chan commented on FLINK-29114:
---

Update: I added a random UUID to the staging dir, and the tests now pass 
consistently across 500 repeats.

!image-2024-02-27-15-32-48-317.png|width=1036,height=307!
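
The fix can be sketched the same way: suffix the staging directory with a 
random UUID so that sinks sharing a target path and a creation timestamp still 
get distinct directories. The helper name and path layout are illustrative 
assumptions, not the exact code of the fix commit.

```java
import java.util.UUID;

public class UniqueStagingPath {
    // Illustrative post-fix scheme (assumption): timestamp plus a random UUID suffix.
    static String stagingDir(String targetDir, long timestamp) {
        return targetDir + "/.staging_" + timestamp + "_" + UUID.randomUUID();
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        String a = stagingDir("/tmp/target", now);
        String b = stagingDir("/tmp/target", now);
        // Distinct even within a single millisecond, so one sink's commit
        // cleanup can no longer delete another sink's staged files.
        System.out.println(!a.equals(b)); // prints "true"
    }
}
```

UUID.randomUUID() draws 122 random bits, so a collision between two 
concurrently created staging dirs is practically impossible.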


[jira] [Assigned] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-02-26 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan reassigned FLINK-29114:
-

Assignee: Jane Chan


[jira] [Updated] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-02-26 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-29114:
--
Attachment: image-2024-02-27-15-32-48-317.png

> TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with 
> result mismatch 
> --
>
> Key: FLINK-29114
> URL: https://issues.apache.org/jira/browse/FLINK-29114
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.15.0, 1.19.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Priority: Major
>  Labels: auto-deprioritized-major, test-stability
> Attachments: FLINK-29114.log, image-2024-02-27-15-23-49-494.png, 
> image-2024-02-27-15-26-07-657.png, image-2024-02-27-15-32-48-317.png
>
>
> It can be reproduced locally by repeating the test. Usually about 100 
> iterations are enough to produce several failures.
> {noformat}
> [ERROR] Tests run: 13, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 1.664 s <<< FAILURE! - in 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase
> [ERROR] 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableHintWithLogicalTableScanReuse
>   Time elapsed: 0.108 s  <<< FAILURE!
> java.lang.AssertionError: expected: 3,2,Hello world, 3,2,Hello world, 3,2,Hello world)> but was: 2,2,Hello, 2,2,Hello, 3,2,Hello world, 3,2,Hello world)>
>     at org.junit.Assert.fail(Assert.java:89)
>     at org.junit.Assert.failNotEquals(Assert.java:835)
>     at org.junit.Assert.assertEquals(Assert.java:120)
>     at org.junit.Assert.assertEquals(Assert.java:146)
>     at 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableHintWithLogicalTableScanReuse(TableSourceITCase.scala:428)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>     at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>     at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>     at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>     at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>     at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>     at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
>     at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
>     at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
>     at 
> 

[jira] [Commented] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-02-26 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821009#comment-17821009
 ] 

Jane Chan commented on FLINK-29114:
---

Hi [~mapohl], sorry for the late reply due to a tight time budget. I've added 
some debug code and found some clues.

I think the root cause lies in how the staging directory for the filesystem 
sink is generated; see 
[FileSystemTableSink.java#L377|https://github.com/apache/flink/blob/1070c6e9e0f9f00991bdeb34f0757e4f0597931e/flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/table/FileSystemTableSink.java#L377].
System.currentTimeMillis() can return the same value under rare conditions, 
which leads to staging directory conflicts.
I added some debug logs; here are the details.

First, I changed @TempDir to CleanupMode.NEVER and noticed that for a 
failed case, only one file was generated.

!image-2024-02-27-15-23-49-494.png|width=799,height=414!

 

Then I added logging and found the staging directory conflicts.

!image-2024-02-27-15-26-07-657.png|width=793,height=462!

 

I think we can fix this by improving how the staging directory is generated.
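As a rough sketch of the kind of improvement suggested above: disambiguating the staging path with a random suffix so that two sinks created in the same millisecond no longer collide. The method and path names below are hypothetical for illustration, not Flink's actual FileSystemTableSink code.

```java
import java.util.UUID;

public class StagingDirDemo {

    // Hypothetical helper: combine the millisecond timestamp with a random
    // UUID, so identical System.currentTimeMillis() values cannot produce
    // the same staging directory.
    static String newStagingDir(String parentDir) {
        return parentDir + "/.staging_" + System.currentTimeMillis()
                + "_" + UUID.randomUUID();
    }

    public static void main(String[] args) {
        // Two directories requested "at the same time" are still distinct.
        String a = newStagingDir("/tmp/sink");
        String b = newStagingDir("/tmp/sink");
        if (a.equals(b)) {
            throw new AssertionError("staging dirs collided: " + a);
        }
        System.out.println("distinct staging dirs: OK");
    }
}
```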


[jira] [Updated] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-02-26 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-29114:
--
Attachment: image-2024-02-27-15-26-07-657.png


[jira] [Updated] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-02-26 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-29114:
--
Attachment: image-2024-02-27-15-23-49-494.png


[jira] [Resolved] (FLINK-34362) Add argument to reuse connector docs cache in setup_docs.sh to improve build times

2024-02-25 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan resolved FLINK-34362.
---
Fix Version/s: 1.19.0
   Resolution: Fixed

> Add argument to reuse connector docs cache in setup_docs.sh to improve build 
> times
> --
>
> Key: FLINK-34362
> URL: https://issues.apache.org/jira/browse/FLINK-34362
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.19.0
>Reporter: Jane Chan
>Assignee: Yunhong Zheng
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Problem:
> The current build process of Flink's documentation involves the 
> `setup_docs.sh` script, which re-clones connector repositories every time the 
> documentation is built. This operation is time-consuming, particularly for 
> developers in regions with slower internet connections or facing network 
> restrictions (like the Great Firewall in China). This results in a build 
> process that can take an excessive amount of time, hindering developer 
> productivity.
>  
> Proposal:
> We could add a command-line argument (e.g., --use-doc-cache) to the 
> `setup_docs.sh` script, which, when set, skips the cloning step if the 
> connector repositories have already been cloned previously. As a result, 
> developers can opt to use the cache when they do not require the latest 
> versions of the connectors' documentation. This change will reduce build 
> times significantly and improve the developer experience for those working on 
> the documentation.
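A rough sketch of how the proposed flag could behave, assuming setup_docs.sh clones each connector repository into a local directory. The function and variable names below are invented for illustration, and the actual `git clone` is replaced by a stand-in so the sketch runs anywhere.

```shell
#!/bin/sh
# Simulate invoking: ./setup_docs.sh --use-doc-cache
set -- --use-doc-cache

USE_CACHE=false
for arg in "$@"; do
  if [ "$arg" = "--use-doc-cache" ]; then USE_CACHE=true; fi
done

# Hypothetical helper: skip cloning when the cache flag is set and a
# previous clone already exists on disk.
fetch_connector_docs() {
  repo_url="$1"; repo_dir="$2"
  if [ "$USE_CACHE" = true ] && [ -d "$repo_dir" ]; then
    echo "Reusing cached clone: $repo_dir"
  else
    mkdir -p "$repo_dir"   # stand-in for: git clone "$repo_url" "$repo_dir"
    echo "Cloned $repo_url"
  fi
}

DIR=$(mktemp -d)/flink-connector-kafka
fetch_connector_docs https://github.com/apache/flink-connector-kafka "$DIR"  # first run clones
fetch_connector_docs https://github.com/apache/flink-connector-kafka "$DIR"  # second run hits the cache
```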



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34362) Add argument to reuse connector docs cache in setup_docs.sh to improve build times

2024-02-25 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan closed FLINK-34362.
-






[jira] [Commented] (FLINK-34362) Add argument to reuse connector docs cache in setup_docs.sh to improve build times

2024-02-25 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820552#comment-17820552
 ] 

Jane Chan commented on FLINK-34362:
---

Fixed in master a95b0fb75b6acc57e8cbde2847f26a1c870b03c0






[jira] [Commented] (FLINK-33397) FLIP-373: Support Configuring Different State TTLs using SQL Hint

2024-02-20 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819085#comment-17819085
 ] 

Jane Chan commented on FLINK-33397:
---

I've drafted a release note for this feature. cc [~lincoln.86xy] and 
[~xuyangzhong] 

> FLIP-373: Support Configuring Different State TTLs using SQL Hint
> -
>
> Key: FLINK-33397
> URL: https://issues.apache.org/jira/browse/FLINK-33397
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: Jane Chan
>Assignee: xuyang
>Priority: Major
> Fix For: 1.19.0
>
>
> Please refer to 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-373%3A+Support+Configuring+Different+State+TTLs+using+SQL+Hint
>  
> |https://cwiki.apache.org/confluence/display/FLINK/FLIP-373%3A+Support+Configuring+Different+State+TTLs+using+SQL+Hint]
>  for more details.





[jira] [Updated] (FLINK-33397) FLIP-373: Support Configuring Different State TTLs using SQL Hint

2024-02-20 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-33397:
--
Release Note: 
This is a new feature in Apache Flink 1.19 that enhances the flexibility and 
user experience when managing SQL state time-to-live (TTL) settings. Users can 
now specify custom TTL values for regular joins and group aggregations directly 
within their queries by [utilizing the STATE_TTL 
hint](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sql/queries/hints/#state-ttl-hints).

This improvement means that you no longer need to alter your compiled plan to 
set specific TTLs for these operators. With the introduction of STATE_TTL 
hints, you can streamline your workflow and dynamically adjust the TTL based on 
your operational requirements.
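For reference, a query using the hint might look like the following, based on the syntax in the linked documentation. The table names and TTL values are illustrative:

```sql
-- Keep the Orders side of the join state for 1 day and the Customers side
-- for 20 days, instead of the global table.exec.state.ttl value.
SELECT /*+ STATE_TTL('Orders' = '1d', 'Customers' = '20d') */
  o.order_id, c.customer_name
FROM Orders o
JOIN Customers c ON o.customer_id = c.customer_id;
```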






[jira] [Commented] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-02-19 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818606#comment-17818606
 ] 

Jane Chan commented on FLINK-29114:
---

Hi [~mapohl], sorry for the late reply; I just noticed your message and will 
take a look now.


[jira] [Assigned] (FLINK-34381) `RelDataType#getFullTypeString` should be used to print in `RelTreeWriterImpl` if `withRowType` is true instead of `Object#toString`

2024-02-08 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan reassigned FLINK-34381:
-

Assignee: xuyang

> `RelDataType#getFullTypeString` should be used to print in 
> `RelTreeWriterImpl` if `withRowType` is true instead of `Object#toString`
> 
>
> Key: FLINK-34381
> URL: https://issues.apache.org/jira/browse/FLINK-34381
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0, 1.19.0
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>
> Currently, `RelTreeWriterImpl` uses `rel.getRowType.toString` to print the row type.
> {code:java}
> if (withRowType) {
>   s.append(", rowType=[").append(rel.getRowType.toString).append("]")
> } {code}
> However, looking deeper into the code, we should use 
> `rel.getRowType.getFullTypeString` instead, because `getFullTypeString` 
> prints richer type information, such as nullability. Taking 
> `StructuredRelDataType` as an example, the diff is shown below:
> {code:java}
> // source
> util.addTableSource[(Long, Int, String)]("MyTable", 'a, 'b, 'c)
> // sql
> SELECT a, c FROM MyTable
> // rel.getRowType.toString
> RecordType(BIGINT a, VARCHAR(2147483647) c)
> // rel.getRowType.getFullTypeString
> RecordType(BIGINT a, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" c) NOT 
> NULL{code}
>    
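To make the difference concrete, here is a small, self-contained Java sketch. It is an illustration only: the `Field` record and the `compact`/`full` helpers below are hypothetical, not Calcite or Flink APIs; they merely mimic how a compact row-type string differs from a full rendering that also surfaces nullability (the real `getFullTypeString` additionally prints details such as character sets).

```java
import java.util.List;
import java.util.stream.Collectors;

public class RowTypeRendering {

    // Hypothetical stand-in for a row-type field (type, name, nullability).
    record Field(String type, String name, boolean nullable) {}

    // Compact rendering: type and name only, in the spirit of rel.getRowType.toString.
    static String compact(List<Field> fields) {
        return fields.stream()
                .map(f -> f.type() + " " + f.name())
                .collect(Collectors.joining(", ", "RecordType(", ")"));
    }

    // Full rendering: also surfaces per-field and row-level nullability,
    // in the spirit of rel.getRowType.getFullTypeString.
    static String full(List<Field> fields, boolean rowNullable) {
        return fields.stream()
                .map(f -> f.type() + " " + f.name() + (f.nullable() ? "" : " NOT NULL"))
                .collect(Collectors.joining(", ", "RecordType(",
                        rowNullable ? ")" : ") NOT NULL"));
    }

    public static void main(String[] args) {
        List<Field> rowType = List.of(
                new Field("BIGINT", "a", true),
                new Field("VARCHAR(2147483647)", "c", true));
        System.out.println(compact(rowType));        // RecordType(BIGINT a, VARCHAR(2147483647) c)
        System.out.println(full(rowType, false));    // RecordType(BIGINT a, VARCHAR(2147483647) c) NOT NULL
    }
}
```

The plan printer only sees a string either way; the point of the ticket is that the richer string makes nullability mismatches visible when debugging plans.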



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34115) TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate fails

2024-02-06 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan closed FLINK-34115.
-

> TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate fails
> --
>
> Key: FLINK-34115
> URL: https://issues.apache.org/jira/browse/FLINK-34115
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.18.0, 1.19.0
>Reporter: Matthias Pohl
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0, 1.18.2
>
>
> It failed twice in the same pipeline run:
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56348=logs=de826397-1924-5900-0034-51895f69d4b7=f311e913-93a2-5a37-acab-4a63e1328f94=11613]
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56348=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=11963]
> {code:java}
>  Jan 14 01:20:01 01:20:01.949 [ERROR] Tests run: 18, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 29.07 s <<< FAILURE! -- in 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase
> Jan 14 01:20:01 01:20:01.949 [ERROR] 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate
>  -- Time elapsed: 0.518 s <<< FAILURE!
> Jan 14 01:20:01 org.opentest4j.AssertionFailedError: 
> Jan 14 01:20:01 
> Jan 14 01:20:01 expected: List((true,6,1), (false,6,1), (true,6,1), 
> (true,3,2), (false,6,1), (false,3,2), (true,6,1), (true,5,2), (false,6,1), 
> (false,5,2), (true,8,1), (true,6,2), (false,8,1), (false,6,2), (true,8,1), 
> (true,6,2))
> Jan 14 01:20:01  but was: List((true,3,1), (false,3,1), (true,5,1), 
> (true,3,2), (false,5,1), (false,3,2), (true,8,1), (true,5,2), (false,8,1), 
> (false,5,2), (true,8,1), (true,5,2), (false,8,1), (false,5,2), (true,8,1), 
> (true,6,2))
> Jan 14 01:20:01   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Jan 14 01:20:01   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Jan 14 01:20:01   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Jan 14 01:20:01   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.checkRank$1(TableAggregateITCase.scala:122)
> Jan 14 01:20:01   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate(TableAggregateITCase.scala:69)
> Jan 14 01:20:01   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.Iterator.forEachRemaining(Iterator.java:116)
> Jan 14 01:20:01   at 
> scala.collection.convert.Wrappers$IteratorWrapper.forEachRemaining(Wrappers.scala:26)
> Jan 14 01:20:01   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272)
> Jan 14 01:20:01   at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> Jan 14 01:20:01   at 
> 
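A side note on reading such failures: the expected and actual lists above are retract-stream changelogs, where the boolean flag marks an insert (`true`) or a retraction (`false`) of a `(value, rank)` row. Replaying both with a small sketch (a hypothetical simplification of Flink's changelog semantics, not the actual test code) shows that they materialize to the same final rows, i.e. the two runs diverge only in intermediate incremental updates, not in the end result:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ChangelogReplay {

    // Replays a retract-style changelog into a multiset of rows.
    // Each entry is {sign, value, rank}: sign +1 inserts the row, -1 retracts it.
    static Map<List<Integer>, Integer> replay(int[][] changelog) {
        Map<List<Integer>, Integer> state = new HashMap<>();
        for (int[] e : changelog) {
            state.merge(List.of(e[1], e[2]), e[0], Integer::sum);
            state.values().removeIf(count -> count == 0); // drop fully retracted rows
        }
        return state;
    }

    public static void main(String[] args) {
        // The "expected" and "but was" lists from the assertion above,
        // with true/false encoded as +1/-1.
        int[][] expected = {
            {1,6,1},{-1,6,1},{1,6,1},{1,3,2},{-1,6,1},{-1,3,2},{1,6,1},{1,5,2},
            {-1,6,1},{-1,5,2},{1,8,1},{1,6,2},{-1,8,1},{-1,6,2},{1,8,1},{1,6,2}};
        int[][] actual = {
            {1,3,1},{-1,3,1},{1,5,1},{1,3,2},{-1,5,1},{-1,3,2},{1,8,1},{1,5,2},
            {-1,8,1},{-1,5,2},{1,8,1},{1,5,2},{-1,8,1},{-1,5,2},{1,8,1},{1,6,2}};
        // Both changelogs converge to the same materialized rows: (8,1) and (6,2).
        System.out.println(replay(expected).equals(replay(actual))); // true
    }
}
```

This kind of replay is a quick way to tell apart a genuinely wrong answer from nondeterminism in how intermediate updates are emitted.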

[jira] [Commented] (FLINK-34115) TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate fails

2024-02-06 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815111#comment-17815111
 ] 

Jane Chan commented on FLINK-34115:
---

I think this issue can be closed now.


[jira] [Resolved] (FLINK-34115) TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate fails

2024-02-06 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan resolved FLINK-34115.
---
Resolution: Fixed


[jira] [Closed] (FLINK-27539) support consuming update and delete changes In Windowing TVFs

2024-02-05 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan closed FLINK-27539.
-

> support consuming update and delete changes In Windowing TVFs
> -
>
> Key: FLINK-27539
> URL: https://issues.apache.org/jira/browse/FLINK-27539
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.15.0
>Reporter: hjw
>Priority: Major
>
> custom_kafka is a CDC table.
> SQL:
> {code:java}
> select DATE_FORMAT(window_end,'yyyy-MM-dd') as date_str,sum(money) as 
> total,name
> from TABLE(CUMULATE(TABLE custom_kafka,descriptor(createtime),interval '1' 
> MINUTES,interval '1' DAY ))
> where status='1'
> group by name,window_start,window_end;
> {code}
> Error
> {code:java}
> Exception in thread "main" org.apache.flink.table.api.TableException: 
> StreamPhysicalWindowAggregate doesn't support consuming update and delete 
> changes which is produced by node TableSourceScan(table=[[default_catalog, 
> default_database, custom_kafka, watermark=[-(createtime, 5000:INTERVAL 
> SECOND)]]], fields=[name, money, status, createtime, operation_ts])
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.createNewNode(FlinkChangelogModeInferenceProgram.scala:396)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.visit(FlinkChangelogModeInferenceProgram.scala:315)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.visitChild(FlinkChangelogModeInferenceProgram.scala:353)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.$anonfun$visitChildren$1(FlinkChangelogModeInferenceProgram.scala:342)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.$anonfun$visitChildren$1$adapted(FlinkChangelogModeInferenceProgram.scala:341)
>  at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
>  at scala.collection.immutable.Range.foreach(Range.scala:155)
>  at scala.collection.TraversableLike.map(TraversableLike.scala:233)
>  at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>  at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyModifyKindSetTraitVisitor.visitChildren(FlinkChangelogModeInferenceProgram.scala:341)
> {code}
> But I found that Group Window Aggregation works when using a CDC table:
> {code:java}
> select DATE_FORMAT(TUMBLE_END(createtime,interval '10' MINUTES),'yyyy-MM-dd') 
> as date_str,sum(money) as total,name
> from custom_kafka
> where status='1'
> group by name,TUMBLE(createtime,interval '10' MINUTES)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-27539) support consuming update and delete changes In Windowing TVFs

2024-02-05 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan resolved FLINK-27539.
---
Resolution: Duplicate






[jira] [Commented] (FLINK-27539) support consuming update and delete changes In Windowing TVFs

2024-02-05 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814568#comment-17814568
 ] 

Jane Chan commented on FLINK-27539:
---

Hi [~martijnvisser], this issue duplicates FLINK-20281, and I think it can be 
closed now. 






[jira] [Assigned] (FLINK-34362) Add argument to reuse connector docs cache in setup_docs.sh to improve build times

2024-02-05 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan reassigned FLINK-34362:
-

Assignee: Yunhong Zheng

> Add argument to reuse connector docs cache in setup_docs.sh to improve build 
> times
> --
>
> Key: FLINK-34362
> URL: https://issues.apache.org/jira/browse/FLINK-34362
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.19.0
>Reporter: Jane Chan
>Assignee: Yunhong Zheng
>Priority: Minor
>
> Problem:
> The current build process of Flink's documentation involves the 
> `setup_docs.sh` script, which re-clones connector repositories every time the 
> documentation is built. This operation is time-consuming, particularly for 
> developers in regions with slower internet connections or facing network 
> restrictions (like the Great Firewall in China). This results in a build 
> process that can take an excessive amount of time, hindering developer 
> productivity.
>  
> Proposal:
> We could add a command-line argument (e.g., --use-doc-cache) to the 
> `setup_docs.sh` script, which, when set, skips the cloning step if the 
> connector repositories have already been cloned previously. As a result, 
> developers can opt to use the cache when they do not require the latest 
> versions of the connectors' documentation. This change will reduce build 
> times significantly and improve the developer experience for those working on 
> the documentation.





[jira] [Commented] (FLINK-34362) Add argument to reuse connector docs cache in setup_docs.sh to improve build times

2024-02-05 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814567#comment-17814567
 ] 

Jane Chan commented on FLINK-34362:
---

Hi [~martijnvisser], +1 to not using the cache by default. Hi [~337361...@qq.com], 
thanks for volunteering; the ticket is assigned to you. 






[jira] [Commented] (FLINK-34362) Add argument to reuse connector docs cache in setup_docs.sh to improve build times

2024-02-05 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814310#comment-17814310
 ] 

Jane Chan commented on FLINK-34362:
---

Cc [~martijnvisser], I'd like to hear your opinion.






[jira] [Created] (FLINK-34362) Add argument to reuse connector docs cache in setup_docs.sh to improve build times

2024-02-05 Thread Jane Chan (Jira)
Jane Chan created FLINK-34362:
-

 Summary: Add argument to reuse connector docs cache in 
setup_docs.sh to improve build times
 Key: FLINK-34362
 URL: https://issues.apache.org/jira/browse/FLINK-34362
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.19.0
Reporter: Jane Chan


Problem:
The current build process of Flink's documentation involves the `setup_docs.sh` 
script, which re-clones connector repositories every time the documentation is 
built. This operation is time-consuming, particularly for developers in regions 
with slower internet connections or facing network restrictions (like the Great 
Firewall in China). This results in a build process that can take an excessive 
amount of time, hindering developer productivity.

 

Proposal:

We could add a command-line argument (e.g., --use-doc-cache) to the 
`setup_docs.sh` script, which, when set, skips the cloning step if the 
connector repositories have already been cloned previously. As a result, 
developers can opt to use the cache when they do not require the latest 
versions of the connectors' documentation. This change will reduce build times 
significantly and improve the developer experience for those working on the 
documentation.
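The proposed flag could look roughly like the following. This is a minimal sketch, not the actual setup_docs.sh implementation; the function name, directory layout, and clone options are illustrative assumptions.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a --use-doc-cache flag for setup_docs.sh.
set -e

USE_DOC_CACHE=false
for arg in "$@"; do
  if [ "$arg" = "--use-doc-cache" ]; then
    USE_DOC_CACHE=true
  fi
done

fetch_connector_docs() {
  local repo_url="$1"
  local target_dir="$2"
  if [ "$USE_DOC_CACHE" = true ] && [ -d "$target_dir/.git" ]; then
    # Cache hit: skip the expensive re-clone.
    echo "Reusing cached clone in $target_dir"
  else
    rm -rf "$target_dir"
    git clone --depth 1 "$repo_url" "$target_dir"
  fi
}

# Demo: pre-create a fake cached clone so this run stays offline.
USE_DOC_CACHE=true
mkdir -p tmp/flink-connector-kafka/.git
fetch_connector_docs "https://github.com/apache/flink-connector-kafka" \
  "tmp/flink-connector-kafka"
# → Reusing cached clone in tmp/flink-connector-kafka
```

Without the flag (or when the cached directory is missing), the function falls through to a fresh shallow clone, so default behavior stays unchanged.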





[jira] [Closed] (FLINK-34323) Session window tvf failed when using named params

2024-02-04 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan closed FLINK-34323.
-

> Session window tvf failed when using named params
> -
>
> Key: FLINK-34323
> URL: https://issues.apache.org/jira/browse/FLINK-34323
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Resolved] (FLINK-34323) Session window tvf failed when using named params

2024-02-04 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan resolved FLINK-34323.
---
Resolution: Fixed







[jira] [Commented] (FLINK-34323) Session window tvf failed when using named params

2024-02-04 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814141#comment-17814141
 ] 

Jane Chan commented on FLINK-34323:
---

Fixed in master 6be30b167990c22765c244a703ab0424e7c3b4d9







[jira] [Assigned] (FLINK-34323) Session window tvf failed when using named params

2024-02-04 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan reassigned FLINK-34323:
-

Assignee: xuyang







[jira] [Resolved] (FLINK-34313) Update doc about session window tvf

2024-02-04 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan resolved FLINK-34313.
---
Resolution: Fixed

> Update doc about session window tvf
> ---
>
> Key: FLINK-34313
> URL: https://issues.apache.org/jira/browse/FLINK-34313
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Affects Versions: 1.19.0
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Closed] (FLINK-34313) Update doc about session window tvf

2024-02-04 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan closed FLINK-34313.
-







[jira] [Assigned] (FLINK-34313) Update doc about session window tvf

2024-02-04 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan reassigned FLINK-34313:
-

Assignee: xuyang







[jira] [Commented] (FLINK-34313) Update doc about session window tvf

2024-02-04 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814106#comment-17814106
 ] 

Jane Chan commented on FLINK-34313:
---

Fixed in master eff073ff1199acf0f26a0f04ede7d692837301c3







[jira] [Updated] (FLINK-34034) When kv hint and list hint handle duplicate query hints, the results are different.

2024-02-04 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-34034:
--
Affects Version/s: 1.18.1
                   1.18.0
                   1.19.0

> When kv hint and list hint handle duplicate query hints, the results are 
> different.
> ---
>
> Key: FLINK-34034
> URL: https://issues.apache.org/jira/browse/FLINK-34034
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0, 1.19.0, 1.18.1
>Reporter: xuyang
>Assignee: Yunhong Zheng
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> When there are duplicate keys in a kv hint, Calcite overwrites the
> previous value with the later one.
> {code:java}
> @TestTemplate
> def test(): Unit = {
>   val sql =
> "SELECT /*+ LOOKUP('table'='D', 'retry-predicate'='lookup_miss', 
> 'retry-strategy'='fixed_delay', 'fixed-delay'='10s','max-attempts'='3', 
> 'max-attempts'='4') */ * FROM MyTable AS T JOIN LookupTable " +
>   "FOR SYSTEM_TIME AS OF T.proctime AS D ON T.a = D.id"
>   util.verifyExecPlan(sql)
> } {code}
> {code:java}
> Calc(select=[a, b, c, PROCTIME_MATERIALIZE(proctime) AS proctime, rowtime, 
> id, name, age]) 
>   +- LookupJoin(table=[default_catalog.default_database.LookupTable], 
> joinType=[InnerJoin], lookup=[id=a], select=[a, b, c, proctime, rowtime, id, 
> name, age], retry=[lookup_miss, FIXED_DELAY, 1ms, 4]) 
> +- DataStreamScan(table=[[default_catalog, default_database, MyTable]], 
> fields=[a, b, c, proctime, rowtime])
> {code}
> But when a list hint is duplicated (such as a join hint), we will choose the 
> first one as the effective hint.
>  
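The two merge semantics described above can be sketched with a short shell analogy. This is illustrative only: Calcite's and Flink's actual hint resolution is Java code, and the hint names below are made up for the demonstration.

```shell
#!/usr/bin/env bash
# Analogy for the two merge semantics, not Flink's implementation.

# kv hints: a map keeps the LAST value for a duplicate key, mirroring
# 'max-attempts'='3' being overwritten by 'max-attempts'='4'.
declare -A kv_hints
kv_hints[max-attempts]=3
kv_hints[max-attempts]=4
echo "effective kv hint: max-attempts=${kv_hints[max-attempts]}"
# → effective kv hint: max-attempts=4

# list hints: the planner keeps the FIRST occurrence of a duplicated
# hint, so a first-match scan models the behavior.
list_hints=("SHUFFLE_HASH(t1)" "SHUFFLE_HASH(t2)")
echo "effective list hint: ${list_hints[0]}"
# → effective list hint: SHUFFLE_HASH(t1)
```

The inconsistency reported in the issue is exactly this split: last-wins for kv hints versus first-wins for list hints.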





[jira] [Closed] (FLINK-34034) When kv hint and list hint handle duplicate query hints, the results are different.

2024-02-04 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan closed FLINK-34034.
-






[jira] [Assigned] (FLINK-34034) When kv hint and list hint handle duplicate query hints, the results are different.

2024-02-04 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan reassigned FLINK-34034:
-

Assignee: Yunhong Zheng  (was: xuyang)

> When kv hint and list hint handle duplicate query hints, the results are 
> different.
> ---
>
> Key: FLINK-34034
> URL: https://issues.apache.org/jira/browse/FLINK-34034
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: xuyang
>Assignee: Yunhong Zheng
>Priority: Minor
>  Labels: pull-request-available
>





  1   2   3   4   5   6   7   8   >