[jira] [Commented] (FLINK-31756) KafkaTableITCase.testStartFromGroupOffsetsNone fails due to UnknownTopicOrPartitionException

2023-04-08 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709939#comment-17709939
 ] 

Sergey Nuyanzin commented on FLINK-31756:
-

similar to https://issues.apache.org/jira/browse/FLINK-30298
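
For illustration: the quoted stack trace below shows the failure originating in {{KafkaTableTestBase#createTestTopic}}, where topic creation through the Kafka admin client times out. A minimal sketch of what such a helper typically looks like (the class name, timeout, and message format are assumptions for illustration, not the actual Flink test code):

{code:java}
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public final class TopicCreationSketch {

    /** Creates a topic and waits for the broker to acknowledge it. */
    public static void createTestTopic(
            String bootstrapServers, String topic, int partitions, short replicationFactor) {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        try (AdminClient admin = AdminClient.create(props)) {
            admin.createTopics(
                            Collections.singleton(new NewTopic(topic, partitions, replicationFactor)))
                    .all()
                    .get(30, TimeUnit.SECONDS); // a broker-side timeout surfaces here
        } catch (Exception e) {
            // This kind of wrapping produces the "Fail to create topic [...]" message in the trace below.
            throw new IllegalStateException(
                    String.format(
                            "Fail to create topic [%s partitions: %d replication factor: %d].",
                            topic, partitions, replicationFactor),
                    e);
        }
    }
}
{code}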

> KafkaTableITCase.testStartFromGroupOffsetsNone fails due to 
> UnknownTopicOrPartitionException
> 
>
> Key: FLINK-31756
> URL: https://issues.apache.org/jira/browse/FLINK-31756
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.17.0
>Reporter: Sergey Nuyanzin
>Priority: Critical
>  Labels: test-stability
>
> The following build fails with {{UnknownTopicOrPartitionException}}
> {noformat}
> Dec 03 01:10:59 Multiple Failures (1 failure)
> Dec 03 01:10:59 -- failure 1 --
> Dec 03 01:10:59 [Any cause is instance of class 'class 
> org.apache.kafka.clients.consumer.NoOffsetForPartitionException'] 
> Dec 03 01:10:59 Expecting any element of:
> Dec 03 01:10:59   [java.lang.IllegalStateException: Fail to create topic 
> [groupOffset_json_dc640086-d1f1-48b8-ad7a-f83d33b6a03c partitions: 4 
> replication factor: 1].
> Dec 03 01:10:59   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestBase.createTestTopic(KafkaTableTestBase.java:143)
> Dec 03 01:10:59   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.startFromGroupOffset(KafkaTableITCase.java:881)
> Dec 03 01:10:59   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.testStartFromGroupOffsetsWithNoneResetStrategy(KafkaTableITCase.java:981)
> Dec 03 01:10:59   ...(64 remaining lines not displayed - this can be 
> changed with Assertions.setMaxStackTraceElementsDisplayed),
> Dec 03 01:10:59 java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: The request timed out.
> Dec 03 01:10:59   at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
> Dec 03 01:10:59   at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
> Dec 03 01:10:59   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
> Dec 03 01:10:59   ...(67 remaining lines not displayed - this can be 
> changed with Assertions.setMaxStackTraceElementsDisplayed),
> Dec 03 01:10:59 org.apache.kafka.common.errors.TimeoutException: The 
> request timed out.
> Dec 03 01:10:59 ]
> Dec 03 01:10:59 to satisfy the given assertions requirements but none did:
> Dec 03 01:10:59 
> Dec 03 01:10:59 java.lang.IllegalStateException: Fail to create topic 
> [groupOffset_json_dc640086-d1f1-48b8-ad7a-f83d33b6a03c partitions: 4 
> replication factor: 1].
> Dec 03 01:10:59   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestBase.createTestTopic(KafkaTableTestBase.java:143)
> Dec 03 01:10:59   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.startFromGroupOffset(KafkaTableITCase.java:881)
> Dec 03 01:10:59   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.testStartFromGroupOffsetsWithNoneResetStrategy(KafkaTableITCase.java:981)
> Dec 03 01:10:59   ...(64 remaining lines not displayed - this can be 
> changed with Assertions.setMaxStackTraceElementsDisplayed)
> Dec 03 01:10:59 error: 
> Dec 03 01:10:59 Expecting actual throwable to be an instance of:
> Dec 03 01:10:59   
> org.apache.kafka.clients.consumer.NoOffsetForPartitionException
> Dec 03 01:10:59 but was:
> Dec 03 01:10:59   java.lang.IllegalStateException: Fail to create topic 
> [groupOffset_json_dc640086-d1f1-48b8-ad7a-f83d33b6a03c partitions: 4 
> replication factor: 1].
> [...]
> {noformat}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47892=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203=36657



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31756) KafkaTableITCase.testStartFromGroupOffsetsNone fails due to UnknownTopicOrPartitionException

2023-04-08 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-31756:
---

 Summary: KafkaTableITCase.testStartFromGroupOffsetsNone fails due 
to UnknownTopicOrPartitionException
 Key: FLINK-31756
 URL: https://issues.apache.org/jira/browse/FLINK-31756
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.17.0
Reporter: Sergey Nuyanzin


The following build fails with {{UnknownTopicOrPartitionException}}
{noformat}
Dec 03 01:10:59 Multiple Failures (1 failure)
Dec 03 01:10:59 -- failure 1 --
Dec 03 01:10:59 [Any cause is instance of class 'class 
org.apache.kafka.clients.consumer.NoOffsetForPartitionException'] 
Dec 03 01:10:59 Expecting any element of:
Dec 03 01:10:59   [java.lang.IllegalStateException: Fail to create topic 
[groupOffset_json_dc640086-d1f1-48b8-ad7a-f83d33b6a03c partitions: 4 
replication factor: 1].
Dec 03 01:10:59 at 
org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestBase.createTestTopic(KafkaTableTestBase.java:143)
Dec 03 01:10:59 at 
org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.startFromGroupOffset(KafkaTableITCase.java:881)
Dec 03 01:10:59 at 
org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.testStartFromGroupOffsetsWithNoneResetStrategy(KafkaTableITCase.java:981)
Dec 03 01:10:59 ...(64 remaining lines not displayed - this can be 
changed with Assertions.setMaxStackTraceElementsDisplayed),
Dec 03 01:10:59 java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: The request timed out.
Dec 03 01:10:59 at 
java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
Dec 03 01:10:59 at 
java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
Dec 03 01:10:59 at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
Dec 03 01:10:59 ...(67 remaining lines not displayed - this can be 
changed with Assertions.setMaxStackTraceElementsDisplayed),
Dec 03 01:10:59 org.apache.kafka.common.errors.TimeoutException: The 
request timed out.
Dec 03 01:10:59 ]
Dec 03 01:10:59 to satisfy the given assertions requirements but none did:
Dec 03 01:10:59 
Dec 03 01:10:59 java.lang.IllegalStateException: Fail to create topic 
[groupOffset_json_dc640086-d1f1-48b8-ad7a-f83d33b6a03c partitions: 4 
replication factor: 1].
Dec 03 01:10:59 at 
org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestBase.createTestTopic(KafkaTableTestBase.java:143)
Dec 03 01:10:59 at 
org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.startFromGroupOffset(KafkaTableITCase.java:881)
Dec 03 01:10:59 at 
org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.testStartFromGroupOffsetsWithNoneResetStrategy(KafkaTableITCase.java:981)
Dec 03 01:10:59 ...(64 remaining lines not displayed - this can be 
changed with Assertions.setMaxStackTraceElementsDisplayed)
Dec 03 01:10:59 error: 
Dec 03 01:10:59 Expecting actual throwable to be an instance of:
Dec 03 01:10:59   
org.apache.kafka.clients.consumer.NoOffsetForPartitionException
Dec 03 01:10:59 but was:
Dec 03 01:10:59   java.lang.IllegalStateException: Fail to create topic 
[groupOffset_json_dc640086-d1f1-48b8-ad7a-f83d33b6a03c partitions: 4 
replication factor: 1].
[...]

{noformat}
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47892=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203=36657



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-26402) MinioTestContainerTest.testS3EndpointNeedsToBeSpecifiedBeforeInitializingFileSyste failed due to Container startup failed

2023-04-08 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709937#comment-17709937
 ] 

Sergey Nuyanzin commented on FLINK-26402:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47931=logs=7e3d33c3-a462-5ea8-98b8-27e1aafe4ceb=ef77f8d1-44c8-5ee2-f175-1c88f61de8c0=14065

> MinioTestContainerTest.testS3EndpointNeedsToBeSpecifiedBeforeInitializingFileSyste
>  failed due to Container startup failed
> -
>
> Key: FLINK-26402
> URL: https://issues.apache.org/jira/browse/FLINK-26402
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
>Affects Versions: 1.15.0, 1.16.0, 1.17.0
>Reporter: Yun Gao
>Priority: Critical
>  Labels: auto-deprioritized-critical, pull-request-available, 
> test-stability
>
> {code:java}
> 2022-02-24T02:49:59.3646340Z Feb 24 02:49:59 [ERROR] Tests run: 6, Failures: 
> 0, Errors: 1, Skipped: 0, Time elapsed: 49.457 s <<< FAILURE! - in 
> org.apache.flink.fs.s3.common.MinioTestContainerTest
> 2022-02-24T02:49:59.3648027Z Feb 24 02:49:59 [ERROR] 
> org.apache.flink.fs.s3.common.MinioTestContainerTest.testS3EndpointNeedsToBeSpecifiedBeforeInitializingFileSyste
>   Time elapsed: 5.751 s  <<< ERROR!
> 2022-02-24T02:49:59.3648805Z Feb 24 02:49:59 
> org.testcontainers.containers.ContainerLaunchException: Container startup 
> failed
> 2022-02-24T02:49:59.3651640Z Feb 24 02:49:59  at 
> org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:336)
> 2022-02-24T02:49:59.3652820Z Feb 24 02:49:59  at 
> org.testcontainers.containers.GenericContainer.start(GenericContainer.java:317)
> 2022-02-24T02:49:59.3653619Z Feb 24 02:49:59  at 
> org.apache.flink.core.testutils.TestContainerExtension.instantiateTestContainer(TestContainerExtension.java:59)
> 2022-02-24T02:49:59.3654319Z Feb 24 02:49:59  at 
> org.apache.flink.core.testutils.TestContainerExtension.before(TestContainerExtension.java:70)
> 2022-02-24T02:49:59.3655057Z Feb 24 02:49:59  at 
> org.apache.flink.core.testutils.EachCallbackWrapper.beforeEach(EachCallbackWrapper.java:45)
> 2022-02-24T02:49:59.3656153Z Feb 24 02:49:59  at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeEachCallbacks$2(TestMethodTestDescriptor.java:163)
> 2022-02-24T02:49:59.3657088Z Feb 24 02:49:59  at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeMethodsOrCallbacksUntilExceptionOccurs$6(TestMethodTestDescriptor.java:199)
> 2022-02-24T02:49:59.3657905Z Feb 24 02:49:59  at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> 2022-02-24T02:49:59.3659016Z Feb 24 02:49:59  at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeMethodsOrCallbacksUntilExceptionOccurs(TestMethodTestDescriptor.java:199)
> 2022-02-24T02:49:59.3660004Z Feb 24 02:49:59  at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeEachCallbacks(TestMethodTestDescriptor.java:162)
> 2022-02-24T02:49:59.3660997Z Feb 24 02:49:59  at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:129)
> 2022-02-24T02:49:59.3662153Z Feb 24 02:49:59  at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
> 2022-02-24T02:49:59.3663189Z Feb 24 02:49:59  at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
> 2022-02-24T02:49:59.3664211Z Feb 24 02:49:59  at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> 2022-02-24T02:49:59.3664971Z Feb 24 02:49:59  at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
> 2022-02-24T02:49:59.3665623Z Feb 24 02:49:59  at 
> org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
> 2022-02-24T02:49:59.3666433Z Feb 24 02:49:59  at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
> 2022-02-24T02:49:59.3667322Z Feb 24 02:49:59  at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> 2022-02-24T02:49:59.3668024Z Feb 24 02:49:59  at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
> 2022-02-24T02:49:59.3669276Z Feb 24 02:49:59  at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
> 2022-02-24T02:49:59.3669881Z Feb 24 02:49:59  at 
> java.util.ArrayList.forEach(ArrayList.java:1259)
> 2022-02-24T02:49:59.3670715Z Feb 24 02:49:59  at 
> 

[GitHub] [flink] luoyuxia commented on pull request #22301: [FLINK-31426][table] Upgrade the deprecated UniqueConstraint to the n…

2023-04-08 Thread via GitHub


luoyuxia commented on PR #22301:
URL: https://github.com/apache/flink/pull/22301#issuecomment-1501027828

   @clownxc Thanks for the contribution. I'll have a look when I'm free.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] slfan1989 commented on pull request #22207: [FLINK-31510][yarn] Use getMemorySize instead of getMemory.

2023-04-08 Thread via GitHub


slfan1989 commented on PR #22207:
URL: https://github.com/apache/flink/pull/22207#issuecomment-1501007098

   > Thanks @slfan1989 for the update, would you mind squashing all to only one 
commit with the message like:
   
   @reswqa Thank you for helping review the code! I will rebase and submit.
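
   For context, the change in the PR title swaps Hadoop's deprecated int-based `Resource#getMemory()` for the long-based `Resource#getMemorySize()`. A hedged before/after sketch (class and variable names are illustrative):

```java
import org.apache.hadoop.yarn.api.records.Resource;

public class MemoryAccessorSketch {

    static long containerMemoryMb(Resource resource) {
        // Before: the deprecated int accessor, which can overflow for very large containers.
        // int memoryMb = resource.getMemory();

        // After: the long-based accessor recommended by YARN.
        return resource.getMemorySize();
    }
}
```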


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] wuchong closed pull request #22254: [FLINK-31597][table] Cleanup usage of deprecated TableEnvironment#registerFunction

2023-04-08 Thread via GitHub


wuchong closed pull request #22254: [FLINK-31597][table] Cleanup usage of 
deprecated TableEnvironment#registerFunction
URL: https://github.com/apache/flink/pull/22254


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] wuchong commented on pull request #22254: [FLINK-31597][table] Cleanup usage of deprecated TableEnvironment#registerFunction

2023-04-08 Thread via GitHub


wuchong commented on PR #22254:
URL: https://github.com/apache/flink/pull/22254#issuecomment-1500915644

   I tried to clean up the registerFunction usage, but it turns out to be a big 
effort because of the behavioral differences between `registerFunction` and 
`createTemporarySystemFunction`, e.g., the error messages, the nullability 
checks on types, the plan digests, etc. I would like to close this PR first 
and may split it into sub-tasks later. 
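
   For reference, the two APIs being compared (a minimal, self-contained sketch; `MyScalarFunction` is a placeholder UDF, not something from this PR):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.functions.ScalarFunction;

public class FunctionRegistrationSketch {

    /** Placeholder UDF used only for illustration. */
    public static class MyScalarFunction extends ScalarFunction {
        public String eval(String s) {
            return s == null ? null : s.toUpperCase();
        }
    }

    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Deprecated path: takes an instance and goes through the legacy type extraction.
        tEnv.registerFunction("my_upper", new MyScalarFunction());

        // Recommended path: takes the class and uses the new type inference, which is
        // where the behavioral differences mentioned above (error messages, nullability
        // checks, plan digests) show up.
        tEnv.createTemporarySystemFunction("my_upper_v2", MyScalarFunction.class);
    }
}
```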


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-31754) Build flink master error with Error in ASM processing class org/apache/calcite/sql/validate/SqlValidatorImpl$NavigationExpander.class: 19

2023-04-08 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-31754:

Description: 
maven 3.25

jdk 1.8

scala 2.12

Windows 10

[E:\Project\flink\flink\flink-table\flink-table-planner]$ mvn package 
-DskipTests -e

 
{code}
[INFO] Error stacktraces are turned on.
[INFO] Scanning for projects...
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.flink:flink-table-planner_2.12:jar:1.18-SNAPSHOT
[WARNING] 'artifactId' contains an expression but should be a constant. @ 
org.apache.flink:flink-table-planner_${scala.binary.version}:[unknown-version], 
E:\Project\flink\flink\flink-table\flink-table-planner\pom.xml, line 29, column 
14
[WARNING] 
[WARNING] It is highly recommended to fix these problems because they threaten 
the stability of your build.
[WARNING] 
[WARNING] For this reason, future Maven versions might no longer support 
building such malformed projects.
[WARNING] 
[INFO]                                                                         
[INFO] 
[INFO] Building Flink : Table : Planner 1.18-SNAPSHOT
[INFO] 
[INFO] 
[INFO] — maven-checkstyle-plugin:3.1.2:check (validate) @ 
flink-table-planner_2.12 —
[WARNING] Old version of checkstyle detected. Consider updating to >= v8.30
[WARNING] For more information see: 
[https://maven.apache.org/plugins/maven-checkstyle-plugin/examples/upgrading-checkstyle.html]
[INFO] You have 0 Checkstyle violations.
[INFO] 
[INFO] — spotless-maven-plugin:2.27.1:check (spotless-check) @ 
flink-table-planner_2.12 —
[INFO] 
[INFO] — maven-enforcer-plugin:3.1.0:enforce (enforce-maven-version) @ 
flink-table-planner_2.12 —
[INFO] 
[INFO] — maven-enforcer-plugin:3.1.0:enforce (enforce-maven) @ 
flink-table-planner_2.12 —
[INFO] 
[INFO] — maven-enforcer-plugin:3.1.0:enforce (ban-unsafe-snakeyaml) @ 
flink-table-planner_2.12 —
[INFO] 
[INFO] — maven-enforcer-plugin:3.1.0:enforce (ban-unsafe-jackson) @ 
flink-table-planner_2.12 —
[INFO] 
[INFO] — maven-enforcer-plugin:3.1.0:enforce (forbid-log4j-1) @ 
flink-table-planner_2.12 —
[INFO] 
[INFO] — maven-enforcer-plugin:3.1.0:enforce 
(forbid-direct-akka-rpc-dependencies) @ flink-table-planner_2.12 —
[INFO] 
[INFO] — maven-enforcer-plugin:3.1.0:enforce 
(forbid-direct-table-planner-dependencies) @ flink-table-planner_2.12 —
[INFO] 
[INFO] — maven-enforcer-plugin:3.1.0:enforce (enforce-versions) @ 
flink-table-planner_2.12 —
[INFO] 
[INFO] — directory-maven-plugin:0.1:directory-of (directories) @ 
flink-table-planner_2.12 —
[INFO] Directory of org.apache.flink:flink-parent set to: E:\Project\flink\flink
[INFO] 
[INFO] — maven-remote-resources-plugin:1.5:process (process-resource-bundles) @ 
flink-table-planner_2.12 —
[INFO] 
[INFO] — maven-resources-plugin:3.1.0:resources (default-resources) @ 
flink-table-planner_2.12 —
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] — scala-maven-plugin:3.2.2:add-source (scala-compile-first) @ 
flink-table-planner_2.12 —
[INFO] Add Source directory: 
E:\Project\flink\flink\flink-table\flink-table-planner\src\main\scala
[INFO] Add Test Source directory: 
E:\Project\flink\flink\flink-table\flink-table-planner\src\test\scala
[INFO] 
[INFO] — scala-maven-plugin:3.2.2:compile (scala-compile-first) @ 
flink-table-planner_2.12 —
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] — maven-compiler-plugin:3.8.0:compile (default-compile) @ 
flink-table-planner_2.12 —
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] — maven-resources-plugin:3.1.0:testResources (default-testResources) @ 
flink-table-planner_2.12 —
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 371 resources
[INFO] Copying 3 resources
[INFO] 
[INFO] — scala-maven-plugin:3.2.2:testCompile (scala-test-compile) @ 
flink-table-planner_2.12 —
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] — maven-compiler-plugin:3.8.0:testCompile (default-testCompile) @ 
flink-table-planner_2.12 —
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] — maven-surefire-plugin:3.0.0-M5:test (default-test) @ 
flink-table-planner_2.12 —
[INFO] Tests are skipped.
[INFO] 
[INFO] — maven-jar-plugin:2.4:jar (default-jar) @ flink-table-planner_2.12 —
[INFO] Building jar: 
E:\Project\flink\flink\flink-table\flink-table-planner\target\flink-table-planner_2.12-1.18-SNAPSHOT.jar
[INFO] 
[INFO] — maven-jar-plugin:2.4:test-jar (default) @ flink-table-planner_2.12 —
[INFO] Building jar: 
E:\Project\flink\flink\flink-table\flink-table-planner\target\flink-table-planner_2.12-1.18-SNAPSHOT-tests.jar
[INFO] 
[INFO] — maven-shade-plugin:3.4.1:shade (shade-flink) @ 
flink-table-planner_2.12 —
[INFO] 

[jira] [Commented] (FLINK-31629) Trying to access closed classloader when submit query to restSqlGateway via SqlClient

2023-04-08 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709898#comment-17709898
 ] 

Weijie Guo commented on FLINK-31629:


After an offline discussion with [~luoyuxia], this issue can be fixed by FLINK-31398. 
Let's track it on that ticket.

 

> Trying to access closed classloader when submit query to restSqlGateway via 
> SqlClient
> -
>
> Key: FLINK-31629
> URL: https://issues.apache.org/jira/browse/FLINK-31629
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Client
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Major
> Attachments: screenshot-1.png
>
>
> When I attempted to resubmit the same SQL job (using HiveCatalog) to 
> SqlGateway through SqlClient, I encountered the error shown in the figure.
> !screenshot-1.png|width=649,height=263!
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31686) Filesystem connector should replace the shallow copy with deep copy

2023-04-08 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709887#comment-17709887
 ] 

Jark Wu commented on FLINK-31686:
-

I think you are right. The current implementation has some problems. The root 
cause is that {{DecodingFormat}} doesn't support {{copy()}}, which means the 
DecodingFormat is reused after a filter/projection is pushed down. 

Therefore, we need to first come up with a new API for 
{{DecodingFormat#copy()}} which may need a public discussion. 
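
To illustrate the sharing problem, a simplified sketch of a {{DynamicTableSource#copy()}} that copies its mutable pushdown state instead of sharing it (the class and field names are hypothetical, not the actual filesystem connector code):

{code:java}
import java.util.ArrayList;
import java.util.List;
import org.apache.flink.table.connector.source.DynamicTableSource;
import org.apache.flink.table.expressions.ResolvedExpression;

/** Hypothetical source used to contrast shallow vs. deep copying of pushdown state. */
public class ExampleFileSource implements DynamicTableSource {

    // Mutable state populated by filter push-down.
    private List<ResolvedExpression> pushedFilters = new ArrayList<>();

    @Override
    public DynamicTableSource copy() {
        ExampleFileSource copy = new ExampleFileSource();
        // Shallow copy (the problematic pattern): copy.pushedFilters = this.pushedFilters;
        // Deep copy: every planned query gets its own pushdown state.
        copy.pushedFilters = new ArrayList<>(this.pushedFilters);
        return copy;
    }

    @Override
    public String asSummaryString() {
        return "ExampleFileSource";
    }
}
{code}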

What do you think [~luoyuxia] [~lincoln.86xy] [~twalthr]? 

> Filesystem connector should replace the shallow copy with deep copy
> ---
>
> Key: FLINK-31686
> URL: https://issues.apache.org/jira/browse/FLINK-31686
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.16.1
>Reporter: tanjialiang
>Priority: Major
> Attachments: image-2023-04-01-16-18-48-762.png, 
> image-2023-04-01-16-18-56-075.png
>
>
> Hi team, when I use the following SQL
> {code:java}
> CREATE TABLE student (
>     `id` STRING,
>     `name` STRING,
>     `age` INT
> ) WITH (
>   'connector' = 'filesystem',
>   'path' = '...',
>   'format' = 'orc'
> );
> select
>     t1.total,
>     t2.total
> from
>     (
>         select
>             count(*) as total,
>             1 as join_key
>         from student
>         where name = 'tanjialiang'
>     ) t1
>     LEFT JOIN (
>         select
>             count(*) as total,
>             1 as join_key
>         from student;
>     ) t2 
>     ON t1.join_key = t2.join_key; {code}
>  
> it will throw an error
> !image-2023-04-01-16-18-48-762.png!
>  
> I tried to solve it, and I found that the filesystem connector's copy function 
> uses a shallow copy instead of a deep copy. This leads to all queries on the same 
> table source reusing the same bulkWriterFormat, and my query has a filter 
> condition which is pushed down into the bulkWriterFormat, so the filter 
> condition may be reused.
> I found that the DynamicTableSource and DynamicTableSink copy function comments 
> ask us to implement it with a deep copy, but every connector 
> implements it with a shallow copy. So I think the 
> filesystem connector is not the only one with this problem.
> !image-2023-04-01-16-18-56-075.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29541) [JUnit5 Migration] Module: flink-table-planner

2023-04-08 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709884#comment-17709884
 ] 

Jark Wu commented on FLINK-29541:
-

Considering that migrating flink-table-planner is a huge effort, I converted 
this issue from a sub-task into an umbrella issue. [~rskraba], feel free to create 
other sub-issues under it when you finish the {{BatchAbstractTestBase}} one. 
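
As an illustration of the kind of change each sub-task involves, a hedged sketch of a typical JUnit 4 to JUnit 5 move for a test base class that manages a temporary folder (this shows the common pattern, not the actual {{BatchAbstractTestBase}} code):

{code:java}
import java.io.File;
import java.nio.file.Path;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.io.TempDir;

class BatchTestBaseMigrationSketch {

    // JUnit 4 style (before):
    // @ClassRule public static final TemporaryFolder TEMP_FOLDER = new TemporaryFolder();

    // JUnit 5 style (after): the framework injects and cleans up a shared temp directory.
    @TempDir
    static Path tempDir;

    @Test
    void writesIntoManagedTempDir() {
        File out = tempDir.resolve("result.csv").toFile();
        // ... run a batch job that writes to 'out' and assert on the result ...
    }
}
{code}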

> [JUnit5 Migration] Module: flink-table-planner
> --
>
> Key: FLINK-29541
> URL: https://issues.apache.org/jira/browse/FLINK-29541
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Planner, Tests
>Reporter: Lijie Wang
>Assignee: Ryan Skraba
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31674) [JUnit5 Migration] Module: flink-table-planner (BatchAbstractTestBase)

2023-04-08 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-31674:

Parent Issue: FLINK-29541  (was: FLINK-25325)

> [JUnit5 Migration] Module: flink-table-planner (BatchAbstractTestBase)
> --
>
> Key: FLINK-31674
> URL: https://issues.apache.org/jira/browse/FLINK-31674
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Ryan Skraba
>Assignee: Ryan Skraba
>Priority: Major
>
> This is one sub-subtask related to the flink-table-planner migration 
> (FLINK-29541).
> While most of the JUnit migration tasks are done per module, a number of 
> abstract test classes in flink-table-planner have large hierarchies that 
> cross module boundaries.  This task is to migrate all of the tests that 
> depend on {{BatchAbstractTestBase}} to JUnit5.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29541) [JUnit5 Migration] Module: flink-table-planner

2023-04-08 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-29541:

Parent: (was: FLINK-25325)
Issue Type: Technical Debt  (was: Sub-task)

> [JUnit5 Migration] Module: flink-table-planner
> --
>
> Key: FLINK-29541
> URL: https://issues.apache.org/jira/browse/FLINK-29541
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Planner, Tests
>Reporter: Lijie Wang
>Assignee: Ryan Skraba
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-6757) Investigate Apache Atlas integration

2023-04-08 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709883#comment-17709883
 ] 

Jark Wu commented on FLINK-6757:


Hi [~litiliu], [~fangyong] is working on FLIP-294[1], which is designed to 
support lineage integration. Feel free to join the discussion (it has not been 
started on the dev ML yet). 

[1] 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-294%3A+Support+Customized+Job+Status+Listener

> Investigate Apache Atlas integration
> 
>
> Key: FLINK-6757
> URL: https://issues.apache.org/jira/browse/FLINK-6757
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Till Rohrmann
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, stale-minor
>
> Users asked for an integration of Apache Flink with Apache Atlas. It might be 
> worthwhile to investigate what is necessary to achieve this task.
> References:
> http://atlas.incubator.apache.org/StormAtlasHook.html



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31661) Add parity between `ROW` value function and it's type declaration

2023-04-08 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709881#comment-17709881
 ] 

Jark Wu commented on FLINK-31661:
-

This can be a big effort because we need to contribute this feature to the 
upstream Calcite project so that the Calcite parser handles the ROW expression. In 
addition, we need to investigate how/whether other mature database systems 
support this. 

> Add parity between `ROW` value function and it's type declaration
> -
>
> Key: FLINK-31661
> URL: https://issues.apache.org/jira/browse/FLINK-31661
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.17.0, 1.16.1, 1.18.0
>Reporter: Mohsen Rezaei
>Priority: Critical
>
> Currently the [{{ROW}} table 
> type|https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/table/types/#row]
>  allows for a name and type, and optionally a description, but [its value 
> constructing 
> function|https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/table/types/#row]
>  only supports an arbitrary list of expressions.
> This prevents users from providing human-readable names for the fields 
> provided to a {{ROW()}} or {{()}} value function call, resulting in 
> system-defined {{EXPR$n}} names that lose their meaning as they are mixed in 
> with other queries.
> For example, the following SQL query:
> {code}
> SELECT (id, name) as struct FROM t1;
> {code}
> results in the following consumable data type for the `ROW` column:
> {code}
> ROW<`EXPR$0` DECIMAL(10, 2), `EXPR$1` STRING> NOT NULL
> {code}
> I'd be happy to contribute to this change, but I need some guidance and 
> pointers on where to start making changes for this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31661) Add parity between `ROW` value function and it's type declaration

2023-04-08 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-31661:

Component/s: Table SQL / API
 (was: API / DataSet)

> Add parity between `ROW` value function and it's type declaration
> -
>
> Key: FLINK-31661
> URL: https://issues.apache.org/jira/browse/FLINK-31661
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.17.0, 1.16.1, 1.18.0
>Reporter: Mohsen Rezaei
>Priority: Critical
>
> Currently the [{{ROW}} table 
> type|https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/table/types/#row]
>  allows for a name and type, and optionally a description, but [its value 
> constructing 
> function|https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/table/types/#row]
>  only supports an arbitrary list of expressions.
> This prevents users from providing human-readable names for the fields 
> provided to a {{ROW()}} or {{()}} value function call, resulting in 
> system-defined {{EXPR$n}} names that lose their meaning as they are mixed in 
> with other queries.
> For example, the following SQL query:
> {code}
> SELECT (id, name) as struct FROM t1;
> {code}
> results in the following consumable data type for the `ROW` column:
> {code}
> ROW<`EXPR$0` DECIMAL(10, 2), `EXPR$1` STRING> NOT NULL
> {code}
> I'd be happy to contribute to this change, but I need some guidance and 
> pointers on where to start making changes for this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31550) Replace deprecated TableSchema with Schema in OperationConverterUtils

2023-04-08 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-31550.
---
Resolution: Invalid

I might have checked out an outdated branch. Thank you for pointing this out. 

> Replace deprecated TableSchema with Schema in OperationConverterUtils
> -
>
> Key: FLINK-31550
> URL: https://issues.apache.org/jira/browse/FLINK-31550
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31629) Trying to access closed classloader when submit query to restSqlGateway via SqlClient

2023-04-08 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-31629:

Component/s: Connectors / Hive

> Trying to access closed classloader when submit query to restSqlGateway via 
> SqlClient
> -
>
> Key: FLINK-31629
> URL: https://issues.apache.org/jira/browse/FLINK-31629
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Client
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Major
> Attachments: screenshot-1.png
>
>
> When I attempted to resubmit the same SQL job (using HiveCatalog) to 
> SqlGateway through SqlClient, I encountered the error shown in the figure.
> !screenshot-1.png|width=649,height=263!
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31629) Trying to access closed classloader when submit query to restSqlGateway via SqlClient

2023-04-08 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709879#comment-17709879
 ] 

Jark Wu commented on FLINK-31629:
-

cc [~fsk119] [~luoyuxia]

> Trying to access closed classloader when submit query to restSqlGateway via 
> SqlClient
> -
>
> Key: FLINK-31629
> URL: https://issues.apache.org/jira/browse/FLINK-31629
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Major
> Attachments: screenshot-1.png
>
>
> When I attempted to resubmit the same SQL job (using HiveCatalog) to 
> SqlGateway through SqlClient, I encountered the error shown in the figure.
> !screenshot-1.png|width=649,height=263!
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-31606) Translate "sqlClient.md" page of "table" into Chinese

2023-04-08 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-31606:
---

Assignee: Fang Yong

> Translate "sqlClient.md" page of "table" into Chinese
> -
>
> Key: FLINK-31606
> URL: https://issues.apache.org/jira/browse/FLINK-31606
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.18.0
>Reporter: Fang Yong
>Assignee: Fang Yong
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31607) Refactor the logic for HiveExecutableOperation

2023-04-08 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-31607:

Component/s: Connectors / Hive

> Refactor the logic for HiveExecutableOperation
> --
>
> Key: FLINK-31607
> URL: https://issues.apache.org/jira/browse/FLINK-31607
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: luoyuxia
>Priority: Major
>
> In FLINK-31409, we wrap the extra operations customized for Hive with 
> {{HiveExecutableOperation}}.
> We should refactor this so that each of the extra operations customized for Hive 
> executes its own logic in its own inner `execute` method.
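
A rough sketch of the refactoring direction described above (the interface and class names are hypothetical, for illustration only):

{code:java}
/** Hypothetical contract: each Hive-specific operation carries its own execution logic. */
interface SelfExecutingHiveOperation {
    /** Executes this operation and returns a human-readable result summary. */
    String execute();
}

/** Example operation implementing its own logic instead of being handled inside a
 *  generic wrapper's switch over operation types. */
class AddJarOperationSketch implements SelfExecutingHiveOperation {

    private final String jarPath;

    AddJarOperationSketch(String jarPath) {
        this.jarPath = jarPath;
    }

    @Override
    public String execute() {
        // ... register the jar with the session's resources/classloader ...
        return "OK: added " + jarPath;
    }
}
{code}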



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] wuchong closed pull request #22276: [FLINK-31603][table-planner] Line break should be removed in create t…

2023-04-08 Thread via GitHub


wuchong closed pull request #22276: [FLINK-31603][table-planner] Line break 
should be removed in create t…
URL: https://github.com/apache/flink/pull/22276


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] wuchong commented on pull request #22276: [FLINK-31603][table-planner] Line break should be removed in create t…

2023-04-08 Thread via GitHub


wuchong commented on PR #22276:
URL: https://github.com/apache/flink/pull/22276#issuecomment-1500878784

   I would like to close this PR first. The community suggests reaching a 
consensus before opening a pull request. Feel free to continue the discussion 
on the JIRA issue. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-31603) Line break should be removed in create table with-clauses, load module with-clauses and table hints for both keys and values

2023-04-08 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-31603.
---
Fix Version/s: (was: 1.18.0)
   Resolution: Invalid

I will close this issue first; feel free to continue the discussion in the 
issue. 

> Line break should be removed in create table with-clauses, load module 
> with-clauses and table hints for both keys and values
> 
>
> Key: FLINK-31603
> URL: https://issues.apache.org/jira/browse/FLINK-31603
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0, 1.16.1
> Environment: Flink 1.16.0
>Reporter: Yao Zhang
>Priority: Major
>  Labels: pull-request-available
>
> Given a SQL like this:
> {code:sql}
> CREATE TABLE MyTable (
>   `user_id` BIGINT,
>   `name` STRING,
>   `timestamp` TIMESTAMP_LTZ(3) METADATA
> ) WITH (
>   'connector' = 'kaf
> ka'
>   ...
> );
> {code}
> After parsing the SQL, the option value 'connector' is 'kaf\nka', which will 
> lead to problems.
> Line breaks inside keys/values in with-clauses and table hints should be 
> removed when parsing SQL statements.
> If this is an issue that needs to be fixed, I would like to do it, as I am 
> currently working on it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31713) k8s operator should gather job version metrics

2023-04-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-31713:
---
Labels: pull-request-available  (was: )

> k8s operator should gather job version metrics
> --
>
> Key: FLINK-31713
> URL: https://issues.apache.org/jira/browse/FLINK-31713
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator, Runtime / Metrics
>Affects Versions: kubernetes-operator-1.5.0
>Reporter: Márton Balassi
>Assignee: Mate Czagany
>Priority: Major
>  Labels: pull-request-available
>
> Similarly to FLINK-31303, we should expose the number of times each Flink 
> version is used in applications on a per-namespace basis. This is sufficient 
> for FlinkDeployments imho (no need to try to dig into session jobs), as the 
> main purpose is to gain visibility into the distribution of versions 
> used and to be able to nudge users along to upgrade.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-kubernetes-operator] mateczagany opened a new pull request, #562: [FLINK-31713] Expose FlinkDeployment version metrics

2023-04-08 Thread via GitHub


mateczagany opened a new pull request, #562:
URL: https://github.com/apache/flink-kubernetes-operator/pull/562

   ## What is the purpose of the change
   
   Expose count of Flink versions on a per namespace basis via metrics
   
   ## Brief change log
   
   - Create new gauge in FlinkDeploymentMetrics for every namespace and every 
Flink version found
   - Update `FlinkDeploymentMetrics#onRemove` to check all metric maps for the 
namespaces
   
   ## Verifying this change
   
   - Added unit test
   - Manually validated locally
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changes to the `CustomResourceDescriptors`: 
no
 - Core observer or reconciler logic that is regularly executed: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? not documented
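
   For illustration, a hedged sketch of per-namespace, per-version counting with the Flink metrics API (the map layout and metric group names here are assumptions, not the operator's actual implementation):

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.flink.metrics.Gauge;
import org.apache.flink.metrics.MetricGroup;

/** Hypothetical helper that counts FlinkDeployments per namespace and Flink version. */
public class DeploymentVersionMetricsSketch {

    // namespace -> Flink version -> deployment names
    private final Map<String, Map<String, Set<String>>> deployments = new ConcurrentHashMap<>();
    private final MetricGroup metricGroup;

    public DeploymentVersionMetricsSketch(MetricGroup metricGroup) {
        this.metricGroup = metricGroup;
    }

    public void onUpdate(String namespace, String version, String deploymentName) {
        Map<String, Set<String>> byVersion =
                deployments.computeIfAbsent(namespace, ns -> new ConcurrentHashMap<>());
        byVersion.computeIfAbsent(version, v -> {
            // Register a gauge the first time this (namespace, version) pair is seen.
            Gauge<Integer> count = () -> byVersion.getOrDefault(v, Set.of()).size();
            metricGroup.addGroup("namespace", namespace).addGroup("FlinkVersion", v).gauge("Count", count);
            return ConcurrentHashMap.newKeySet();
        }).add(deploymentName);
    }

    public void onRemove(String namespace, String version, String deploymentName) {
        Map<String, Set<String>> byVersion = deployments.get(namespace);
        if (byVersion != null && byVersion.containsKey(version)) {
            byVersion.get(version).remove(deploymentName);
        }
    }
}
```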
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] WencongLiu commented on pull request #22341: [FLINK-27204] Refract FileSystemJobResultStore to execute I/O operations on the ioExecutor

2023-04-08 Thread via GitHub


WencongLiu commented on PR #22341:
URL: https://github.com/apache/flink/pull/22341#issuecomment-1500819380

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org