[GitHub] [flink-web] leonardBang commented on a diff in pull request #583: [FLINK-30007] Explain how users can request a Jira account
leonardBang commented on code in PR #583: URL: https://github.com/apache/flink-web/pull/583#discussion_r1021172671 ## community.md: ## @@ -170,6 +170,47 @@ Make sure to tag your questions there accordingly to get answers from the Flink ## Issue Tracker We use Jira to track all code related issues: [{{ site.jira }}]({{ site.jira }}). +You must have a JIRA account in order to log cases and issues. + +### I already have an ASF JIRA account and want to be added as a contributor + +If you already have an ASF JIRA account, you do not need to sign up for a new account. +Please email [jira-reque...@flink.apache.org]([jira-reque...@flink.apache.org]) using the following template, so that we can add your account to the +contributors list in JIRA: + +[Open the template in your email client](mailto:jira-reque...@flink.apache.org?subject=Add%20me%20as%20a%20contributor%20to%20JIRA&body=Hello,%0A%0APlease%20add%20me%20as%20a%20contributor%20to%20JIRA.%0AMy%20JIRA%20username%20is:%20%5BINSERT%20YOUR%20JIRA%20USERNAME%20HERE%5D%0A%0AThanks,%0A%5BINSERT%20YOUR%20NAME%20HERE%5D) + +```text +Subject: Add me as a contributor to JIRA + +Hello, + +Please add me as a contributor to JIRA.
+My JIRA username is: [INSERT YOUR JIRA USERNAME HERE] + +Thanks, +[INSERT YOUR NAME HERE] +``` + +### I do not have an ASF JIRA account, want to request an account and be added as a contributor + +In order to request an ASF JIRA account, you will need to email [jira-reque...@flink.apache.org]([jira-reque...@flink.apache.org]) using the following template: + +[Open the template in your email client](mailto:jira-reque...@flink.apache.org?subject=Request%20for%20JIRA%20Account&body=Hello,%0A%0AI%20would%20like%20to%20request%20a%20JIRA%20account.%0AMy%20proposed%20JIRA%20username:%20%5BINSERT%20YOUR%20DESIRED%20JIRA%20USERNAME%20HERE%20(LOWERCASE%20LETTERS%20AND%20NUMBERS%20ONLY)%5D%0AMy%20full%20name:%20%5BINSERT%20YOUR%20FULL%20NAME%20HERE%5D%0AMy%20email%20address:%20%5BINSERT%20YOUR%20EMAIL%20ADDRESS%20HERE%5D%0A%0AThanks,%0A%5BINSERT%20YOUR%20NAME%20HERE%5D) + +```text +Subject: Request for JIRA Account + +Hello, + +I would like to request a JIRA account. +My proposed JIRA username: [INSERT YOUR DESIRED JIRA USERNAME HERE (LOWERCASE LETTERS AND NUMBERS ONLY)] Review Comment: Okay, it makes sense to me -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
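For reference, the "Open the template in your email client" links in the diff above are plain `mailto:` URLs whose subject and body are percent-encoded. A minimal sketch of how such a link can be generated (the address `dev@example.org` is a placeholder, not the real request alias):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class MailtoTemplate {

    // Percent-encode one mailto component. URLEncoder targets
    // application/x-www-form-urlencoded, which turns spaces into '+',
    // so rewrite them to %20 for use inside a mailto: URL.
    static String encode(String component) {
        return URLEncoder.encode(component, StandardCharsets.UTF_8).replace("+", "%20");
    }

    public static void main(String[] args) {
        String subject = "Add me as a contributor to JIRA";
        String body = "Hello,\n\n"
                + "Please add me as a contributor to JIRA.\n"
                + "My JIRA username is: [INSERT YOUR JIRA USERNAME HERE]\n\n"
                + "Thanks,\n[INSERT YOUR NAME HERE]";
        String link = "mailto:dev@example.org"
                + "?subject=" + encode(subject)
                + "&body=" + encode(body);
        System.out.println(link);
    }
}
```

Newlines become `%0A` and square brackets become `%5B`/`%5D`, matching the encoding visible in the links above.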
[jira] [Created] (FLINK-30015) Benchmarks are failing
Martijn Visser created FLINK-30015: -- Summary: Benchmarks are failing Key: FLINK-30015 URL: https://issues.apache.org/jira/browse/FLINK-30015 Project: Flink Issue Type: Bug Components: Benchmarks Reporter: Martijn Visser {code:java} Build interrupted 1411 of flink-master-benchmarks-regression-check (Open): org.jenkinsci.plugins.workflow.steps.FlowInterruptedException {code} Builds 1405 through 1411 have all failed -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink-table-store] SteNicholas commented on pull request #375: [FLINK-28552] GenerateUtils#generateCompare supports MULTISET and MAP
SteNicholas commented on PR #375: URL: https://github.com/apache/flink-table-store/pull/375#issuecomment-1313230227 @LadyForest, @JingsongLi, could you please take a look at the support?
[GitHub] [flink-table-store] SteNicholas commented on a diff in pull request #378: [FLINK-30013] Add check update column type
SteNicholas commented on code in PR #378: URL: https://github.com/apache/flink-table-store/pull/378#discussion_r1021166499 ## flink-table-store-core/src/main/java/org/apache/flink/table/store/file/schema/SchemaManager.java: ## @@ -201,6 +203,12 @@ public TableSchema commitChanges(List changes) throws Exception { DataType newType = TableSchema.toDataType( update.newLogicalType(), new AtomicInteger(0)); +checkState( +LogicalTypeCasts.supportsImplicitCast( +field.type().logicalType, update.newLogicalType()), +String.format( +"Row type %s cannot be converted to %s without losing information.", Review Comment: Could we add the field name to the message? That would make it convenient for users to check which field type is incorrect.
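The review suggestion above — naming the offending field in the error message — could look roughly like this self-contained sketch. Here `supportsImplicitCast` is a toy stand-in for Flink's `LogicalTypeCasts.supportsImplicitCast` (it only whitelists an INT-to-BIGINT widening for illustration), and `checkState` mimics the precondition utility used in the diff:

```java
public class UpdateColumnCheck {

    // Toy stand-in for LogicalTypeCasts.supportsImplicitCast: only the
    // INT -> BIGINT widening is treated as a safe implicit cast here.
    static boolean supportsImplicitCast(String sourceType, String targetType) {
        return sourceType.equals(targetType)
                || ("INT".equals(sourceType) && "BIGINT".equals(targetType));
    }

    // Mimics org.apache.flink.util.Preconditions.checkState.
    static void checkState(boolean condition, String message) {
        if (!condition) {
            throw new IllegalStateException(message);
        }
    }

    // The error message now names the field being altered, as the review suggests.
    static void checkUpdateColumnType(String fieldName, String oldType, String newType) {
        checkState(
                supportsImplicitCast(oldType, newType),
                String.format(
                        "Column type %s of field %s cannot be converted to %s without losing information.",
                        oldType, fieldName, newType));
    }

    public static void main(String[] args) {
        checkUpdateColumnType("user_id", "INT", "BIGINT"); // widening: accepted
        try {
            checkUpdateColumnType("amount", "BIGINT", "INT"); // narrowing: rejected
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

With the field name included, a failing `ALTER`-style change points directly at the column to fix rather than only at a pair of types.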
[GitHub] [flink-web] MartijnVisser commented on a diff in pull request #583: [FLINK-30007] Explain how users can request a Jira account
MartijnVisser commented on code in PR #583: URL: https://github.com/apache/flink-web/pull/583#discussion_r1021164836 ## community.md: ## @@ -170,6 +170,47 @@ Make sure to tag your questions there accordingly to get answers from the Flink ## Issue Tracker We use Jira to track all code related issues: [{{ site.jira }}]({{ site.jira }}). +You must have a JIRA account in order to log cases and issues. + +### I already have an ASF JIRA account and want to be added as a contributor + +If you already have an ASF JIRA account, you do not need to sign up for a new account. +Please email [jira-reque...@flink.apache.org]([jira-reque...@flink.apache.org]) using the following template, so that we can add your account to the +contributors list in JIRA: + +[Open the template in your email client](mailto:jira-reque...@flink.apache.org?subject=Add%20me%20as%20a%20contributor%20to%20JIRA&body=Hello,%0A%0APlease%20add%20me%20as%20a%20contributor%20to%20JIRA.%0AMy%20JIRA%20username%20is:%20%5BINSERT%20YOUR%20JIRA%20USERNAME%20HERE%5D%0A%0AThanks,%0A%5BINSERT%20YOUR%20NAME%20HERE%5D) + +```text +Subject: Add me as a contributor to JIRA + +Hello, + +Please add me as a contributor to JIRA.
+My JIRA username is: [INSERT YOUR JIRA USERNAME HERE] + +Thanks, +[INSERT YOUR NAME HERE] +``` + +### I do not have an ASF JIRA account, want to request an account and be added as a contributor + +In order to request an ASF JIRA account, you will need to email [jira-reque...@flink.apache.org]([jira-reque...@flink.apache.org]) using the following template: + +[Open the template in your email client](mailto:jira-reque...@flink.apache.org?subject=Request%20for%20JIRA%20Account&body=Hello,%0A%0AI%20would%20like%20to%20request%20a%20JIRA%20account.%0AMy%20proposed%20JIRA%20username:%20%5BINSERT%20YOUR%20DESIRED%20JIRA%20USERNAME%20HERE%20(LOWERCASE%20LETTERS%20AND%20NUMBERS%20ONLY)%5D%0AMy%20full%20name:%20%5BINSERT%20YOUR%20FULL%20NAME%20HERE%5D%0AMy%20email%20address:%20%5BINSERT%20YOUR%20EMAIL%20ADDRESS%20HERE%5D%0A%0AThanks,%0A%5BINSERT%20YOUR%20NAME%20HERE%5D) + +```text +Subject: Request for JIRA Account + +Hello, + +I would like to request a JIRA account. +My proposed JIRA username: [INSERT YOUR DESIRED JIRA USERNAME HERE (LOWERCASE LETTERS AND NUMBERS ONLY)] Review Comment: Let's see if we run into a lot of conflicts first. If we do, then let's extend the docs with a little how-to on checking if the username is already taken.
[jira] [Updated] (FLINK-30014) Fix the NPE from aggregate util
[ https://issues.apache.org/jira/browse/FLINK-30014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiang Xin updated FLINK-30014: -- Description: The following exception is thrown in the Flink ML CI step.
{code:java}
[INFO] Running org.apache.flink.ml.feature.CountVectorizerTest
Error: Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 6.419 s <<< FAILURE! - in org.apache.flink.ml.feature.CountVectorizerTest
Error: testFitAndPredict Time elapsed: 0.66 s <<< ERROR!
java.lang.RuntimeException: Failed to fetch next result
	at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
	at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
	at org.apache.commons.collections.IteratorUtils.toList(IteratorUtils.java:848)
	at org.apache.commons.collections.IteratorUtils.toList(IteratorUtils.java:825)
	at org.apache.flink.ml.feature.CountVectorizerTest.verifyPredictionResult(CountVectorizerTest.java:120)
	at org.apache.flink.ml.feature.CountVectorizerTest.testFitAndPredict(CountVectorizerTest.java:208)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
	at org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
	at org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
	at org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
	at org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:220)
	at org.junit.platform.launcher.core.DefaultLauncher.lambda$execute$6(DefaultLauncher.java:188)
	at org.junit.platform.launcher.core.DefaultLauncher.withInterceptedStreams(DefaultLauncher.java:202)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:181)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:128)
	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:142)
	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:109)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBoote
[jira] [Created] (FLINK-30014) Fix the NPE from aggregate util
Jiang Xin created FLINK-30014: - Summary: Fix the NPE from aggregate util Key: FLINK-30014 URL: https://issues.apache.org/jira/browse/FLINK-30014 Project: Flink Issue Type: Bug Components: Library / Machine Learning Reporter: Jiang Xin Fix For: ml-2.2.0 The following exception is thrown in the Flink ML CI step.
```
[INFO] Running org.apache.flink.ml.feature.CountVectorizerTest
Error: Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 6.419 s <<< FAILURE! - in org.apache.flink.ml.feature.CountVectorizerTest
Error: testFitAndPredict Time elapsed: 0.66 s <<< ERROR!
java.lang.RuntimeException: Failed to fetch next result
	at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
	at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
	at org.apache.commons.collections.IteratorUtils.toList(IteratorUtils.java:848)
	at org.apache.commons.collections.IteratorUtils.toList(IteratorUtils.java:825)
	at org.apache.flink.ml.feature.CountVectorizerTest.verifyPredictionResult(CountVectorizerTest.java:120)
	at org.apache.flink.ml.feature.CountVectorizerTest.testFitAndPredict(CountVectorizerTest.java:208)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
	at org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
[GitHub] [flink-table-store] SteNicholas commented on pull request #379: [FLINK-30012]fix a typo in official Table Store document.
SteNicholas commented on PR #379: URL: https://github.com/apache/flink-table-store/pull/379#issuecomment-1313223244 @houhang1005, thanks for your contribution. Could you please rebase onto the master branch and squash the commits into one?
[GitHub] [flink-kubernetes-operator] gyfora commented on a diff in pull request #437: [FLINK-29609] Shut down JM for terminated applications after configured duration
gyfora commented on code in PR #437: URL: https://github.com/apache/flink-kubernetes-operator/pull/437#discussion_r1021154485 ## flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/reconciler/deployment/ApplicationReconciler.java: ## @@ -318,6 +319,31 @@ private boolean shouldRestartJobBecauseUnhealthy( return restartNeeded; } +private boolean cleanupTerminalJmAfterTtl( +FlinkDeployment deployment, Configuration observeConfig) { +var status = deployment.getStatus(); +boolean terminal = ReconciliationUtils.isJobInTerminalState(status); +boolean jmStillRunning = +status.getJobManagerDeploymentStatus() != JobManagerDeploymentStatus.MISSING; + +if (terminal && jmStillRunning) { +var ttl = observeConfig.get(KubernetesOperatorConfigOptions.OPERATOR_JM_SHUTDOWN_TTL); +boolean ttlPassed = +Instant.now() +.isAfter( +Instant.ofEpochMilli( +Long.parseLong( + status.getJobStatus().getUpdateTime())) +.plus(ttl)); +if (ttlPassed) { +LOG.info("Removing JobManager deployment for terminal application."); +flinkService.deleteClusterDeployment(deployment.getMetadata(), status, false); Review Comment: Flink itself deletes HA metadata for terminal jobs, so this should not be necessary
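The TTL condition in the diff above reduces to "has the configured duration elapsed since the job status's update time". A self-contained sketch of just that comparison (`ttlPassed` is an illustrative helper extracted for clarity, not part of the operator's API):

```java
import java.time.Duration;
import java.time.Instant;

public class TtlCheck {

    // True once `ttl` has elapsed since the update timestamp, mirroring the
    // Instant.ofEpochMilli(updateTime).plus(ttl) vs. Instant.now() check in the diff.
    static boolean ttlPassed(long updateTimeMillis, Duration ttl, Instant now) {
        return now.isAfter(Instant.ofEpochMilli(updateTimeMillis).plus(ttl));
    }

    public static void main(String[] args) {
        Instant now = Instant.now();
        long updateTime = now.minus(Duration.ofMinutes(10)).toEpochMilli();

        System.out.println(ttlPassed(updateTime, Duration.ofMinutes(5), now));  // TTL already elapsed
        System.out.println(ttlPassed(updateTime, Duration.ofMinutes(30), now)); // still within TTL
    }
}
```

Passing `now` as a parameter instead of calling `Instant.now()` inside the helper also makes the condition easy to unit-test deterministically.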
[GitHub] [flink-table-store] houhang1005 opened a new pull request, #379: [FLINK-30012]fix a typo in official Table Store document.
houhang1005 opened a new pull request, #379: URL: https://github.com/apache/flink-table-store/pull/379 The word "exiting" in "Reorganize exiting data must be achieved by INSERT OVERWRITE." doesn't make sense; it is most likely a typo for "existing".
[jira] [Updated] (FLINK-30013) Add data type compatibility check in SchemaChange.updateColumnType
[ https://issues.apache.org/jira/browse/FLINK-30013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-30013: --- Labels: pull-request-available (was: ) > Add data type compatibility check in SchemaChange.updateColumnType > -- > > Key: FLINK-30013 > URL: https://issues.apache.org/jira/browse/FLINK-30013 > Project: Flink > Issue Type: Sub-task > Components: Table Store >Affects Versions: table-store-0.3.0 >Reporter: Shammon >Priority: Major > Labels: pull-request-available > > Add a LogicalTypeCasts.supportsImplicitCast check to the > SchemaChange.updateColumnType operation to avoid data type conversion failures when > reading data
[GitHub] [flink-table-store] zjureel opened a new pull request, #378: [FLINK-30013] Add check update column type
zjureel opened a new pull request, #378: URL: https://github.com/apache/flink-table-store/pull/378 Check the source and target column types in `SchemaManager` to avoid data type conversion failures when reading data
[GitHub] [flink] JerryYue-M commented on pull request #21197: [FLINK-29801] OperatorCoordinator need open the way to operate metric…
JerryYue-M commented on PR #21197: URL: https://github.com/apache/flink/pull/21197#issuecomment-1313211460 @flinkbot run azure
[jira] [Updated] (FLINK-30007) Document how users can request a Jira account / file a bug
[ https://issues.apache.org/jira/browse/FLINK-30007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-30007: --- Labels: pull-request-available (was: ) > Document how users can request a Jira account / file a bug > --- > > Key: FLINK-30007 > URL: https://issues.apache.org/jira/browse/FLINK-30007 > Project: Flink > Issue Type: Improvement > Components: Documentation, Project Website >Reporter: Martijn Visser >Assignee: Martijn Visser >Priority: Major > Labels: pull-request-available > > Follow-up of https://lists.apache.org/thread/y8vx7qr32xny31qq00f1jzpnz4kw8hpg
[GitHub] [flink-web] leonardBang commented on a diff in pull request #583: [FLINK-30007] Explain how users can request a Jira account
leonardBang commented on code in PR #583: URL: https://github.com/apache/flink-web/pull/583#discussion_r1021149435 ## community.md: ## @@ -170,6 +170,47 @@ Make sure to tag your questions there accordingly to get answers from the Flink ## Issue Tracker We use Jira to track all code related issues: [{{ site.jira }}]({{ site.jira }}). +You must have a JIRA account in order to log cases and issues. + +### I already have an ASF JIRA account and want to be added as a contributor + +If you already have an ASF JIRA account, you do not need to sign up for a new account. +Please email [jira-reque...@flink.apache.org]([jira-reque...@flink.apache.org]) using the following template, so that we can add your account to the +contributors list in JIRA: + +[Open the template in your email client](mailto:jira-reque...@flink.apache.org?subject=Add%20me%20as%20a%20contributor%20to%20JIRA&body=Hello,%0A%0APlease%20add%20me%20as%20a%20contributor%20to%20JIRA.%0AMy%20JIRA%20username%20is:%20%5BINSERT%20YOUR%20JIRA%20USERNAME%20HERE%5D%0A%0AThanks,%0A%5BINSERT%20YOUR%20NAME%20HERE%5D) + +```text +Subject: Add me as a contributor to JIRA + +Hello, + +Please add me as a contributor to JIRA.
+My JIRA username is: [INSERT YOUR JIRA USERNAME HERE] + +Thanks, +[INSERT YOUR NAME HERE] +``` + +### I do not have an ASF JIRA account, want to request an account and be added as a contributor + +In order to request an ASF JIRA account, you will need to email [jira-reque...@flink.apache.org]([jira-reque...@flink.apache.org]) using the following template: + +[Open the template in your email client](mailto:jira-reque...@flink.apache.org?subject=Request%20for%20JIRA%20Account&body=Hello,%0A%0AI%20would%20like%20to%20request%20a%20JIRA%20account.%0AMy%20proposed%20JIRA%20username:%20%5BINSERT%20YOUR%20DESIRED%20JIRA%20USERNAME%20HERE%20(LOWERCASE%20LETTERS%20AND%20NUMBERS%20ONLY)%5D%0AMy%20full%20name:%20%5BINSERT%20YOUR%20FULL%20NAME%20HERE%5D%0AMy%20email%20address:%20%5BINSERT%20YOUR%20EMAIL%20ADDRESS%20HERE%5D%0A%0AThanks,%0A%5BINSERT%20YOUR%20NAME%20HERE%5D) + +```text +Subject: Request for JIRA Account + +Hello, + +I would like to request a JIRA account. +My proposed JIRA username: [INSERT YOUR DESIRED JIRA USERNAME HERE (LOWERCASE LETTERS AND NUMBERS ONLY)] Review Comment: Does the user need to try this process again if a username conflict happens? Could we offer guidance on checking whether the desired username conflicts with an existing one?
## community.zh.md: ## @@ -167,6 +167,49 @@ Committer 们会关注 [Stack Overflow](http://stackoverflow.com/questions/tagge ## Issue 追踪 我们使用 Jira 进行所有代码相关的 issues 追踪 [{{ site.jira }}]({{ site.jira }})。 +所有门票必须是英文的。 + +您必须拥有 JIRA 帐户才能记录案例和问题。 + +### 我已经有一个 ASF JIRA 帐户并希望被添加为贡献者 + +如果您已经拥有 ASF JIRA 帐户,则无需注册新帐户。 +请使用以下模板向 [jira-reque...@flink.apache.org]([jira-reque...@flink.apache.org]) 发送电子邮件,以便我们将您的帐户添加到 +JIRA 中的贡献者列表: + +[在您的电子邮件客户端中打开模板](mailto:jira-reque...@flink.apache.org?subject=Add%20me%20as%20a%20contributor%20to%20JIRA&body=Hello,%0A%0APlease%20add%20me%20as%20a%20contributor%20to%20JIRA.%0AMy%20JIRA%20username%20is:%20%5BINSERT%20YOUR%20JIRA%20USERNAME%20HERE%5D%0A%0AThanks,%0A%5BINSERT%20YOUR%20NAME%20HERE%5D) + +```text +Subject: Add me as a contributor to JIRA + +Hello, + +Please add me as a contributor to JIRA. +My JIRA username is: [INSERT YOUR JIRA USERNAME HERE] + +Thanks, +[INSERT YOUR NAME HERE] +``` + +### 我没有 ASF JIRA 帐户,想申请一个帐户并添加为贡献者 + +要申请 ASF JIRA 帐户,您需要使用以下模板向 [jira-reque...@flink.apache.org]([jira-reque...@flink.apache.org]) 发送电子邮件: + +[在您的电子邮件客户端中打开模板](mailto:jira-reque...@flink.apache.org?subject=Request%20for%20JIRA%20Account&body=Hello,%0A%0AI%20would%20like%20to%20request%20a%20JIRA%20account.%0AMy%20proposed%20JIRA%20username:%20%5BINSERT%20YOUR%20DESIRED%20JIRA%20USERNAME%20HERE%20(LOWERCASE%20LETTERS%20AND%20NUMBERS%20ONLY)%5D%0AMy%20full%20name:%20%5BINSERT%20YOUR%20FULL%20NAME%20HERE%5D%0AMy%20email%20address:%20%5BINSERT%20YOUR%20EMAIL%20ADDRESS%20HERE%5D%0A%0AThanks,%0A%5BINSERT%20YOUR%20NAME%20HERE%5D) + +```text +Subject: Request for JIRA Account + +Hello, + +I would like to request a JIRA account.
+My proposed JIRA username: [INSERT YOUR DESIRED JIRA USERNAME HERE (LOWERCASE LETTERS AND NUMBERS ONLY)] +My full name: [INSERT YOUR FULL NAME HERE] +My email address: [INSERT YOUR EMAIL ADDRESS HERE] + +Thanks, +[INSERT YOUR NAME HERE] Review Comment: ```suggestion ### 我没有 ASF JIRA 账号,想申请一个账号并将其添加为贡献者 为了申请 ASF JIRA 账号,您需要按照下述邮件模板向 [jira-reque...@flink.apache.org]([jira-reque...@flink.apache.org]) 发送电子邮件: [在您的电子邮件客户端中打开模板](mailto:jira-reque...@flink.apache.org?subject=Request%20for%20JIRA%20Account&body=Hello,%0A%0AI%20would%20like%20to%20request%20a%20JIRA%20account.%0AMy%20proposed%20JIRA%20username:%20%5BINSERT%20YOUR%20DESIRED%20JIRA%20USERNAME%20HERE%20(LOWERCASE%20LETTERS%20AND%20NUMBERS%20ONLY)%5D%0AMy%20full%20name:%20%5BINSERT%20YOUR%20FULL%20NAME%20HERE%5D%0AMy%20email
[GitHub] [flink-table-store] wxplovecc commented on a diff in pull request #357: [FLINK-29922] Support create external table for hive catalog
wxplovecc commented on code in PR #357: URL: https://github.com/apache/flink-table-store/pull/357#discussion_r1021144454 ## flink-table-store-core/src/main/java/org/apache/flink/table/store/table/TableType.java: ## @@ -0,0 +1,48 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.store.table; + +import org.apache.flink.configuration.DescribedEnum; +import org.apache.flink.configuration.description.InlineElement; + +import static org.apache.flink.configuration.description.TextElement.text; + +/** Enum of catalog table type. */ +public enum TableType implements DescribedEnum { +MANAGED("MANAGED_TABLE", "Hive manage the lifecycle of the table."), +EXTERNAL("EXTERNAL_TABLE", "Files are already present or in remote locations."); Review Comment: updated @SteNicholas -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-29604) Add Estimator and Transformer for CountVectorizer
[ https://issues.apache.org/jira/browse/FLINK-29604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-29604: --- Labels: pull-request-available (was: ) > Add Estimator and Transformer for CountVectorizer > - > > Key: FLINK-29604 > URL: https://issues.apache.org/jira/browse/FLINK-29604 > Project: Flink > Issue Type: New Feature > Components: Library / Machine Learning >Affects Versions: ml-2.2.0 >Reporter: Yunfeng Zhou >Priority: Major > Labels: pull-request-available > Fix For: ml-2.2.0 > > > Add Estimator and Transformer for CountVectorizer. > Its function would be at least equivalent to Spark's > org.apache.spark.ml.feature.CountVectorizer. The relevant PR should contain > the following components: > * Java implementation/test (Must include) > * Python implementation/test (Optional) > * Markdown document (Optional) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink-ml] jiangxin369 opened a new pull request, #174: [FLINK-29604] Add Estimator and Transformer for CountVectorizer
jiangxin369 opened a new pull request, #174: URL: https://github.com/apache/flink-ml/pull/174 ## What is the purpose of the change Add Estimator and Transformer for CountVectorizer. ## Brief change log - Adds Transformer and Estimator implementation of CountVectorizer in Java and Python. - Adds examples and documentation of CountVectorizer. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no) ## Documentation - Does this pull request introduce a new feature? (yes) - If yes, how is the feature documented? (docs / JavaDocs) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Closed] (FLINK-30010) flink-quickstart-test failed due to could not resolve dependencies
[ https://issues.apache.org/jira/browse/FLINK-30010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler closed FLINK-30010. Fix Version/s: 1.17.0 Resolution: Fixed master: 7ebe31829c8cf9b4452514284ce0ca298ab746ff > flink-quickstart-test failed due to could not resolve dependencies > --- > > Key: FLINK-30010 > URL: https://issues.apache.org/jira/browse/FLINK-30010 > Project: Flink > Issue Type: Bug > Components: Examples, Tests >Affects Versions: 1.17.0 >Reporter: Leonard Xu >Assignee: Chesnay Schepler >Priority: Major > Fix For: 1.17.0 > > > {noformat} > Nov 13 02:10:37 [ERROR] Failed to execute goal on project > flink-quickstart-test: Could not resolve dependencies for project > org.apache.flink:flink-quickstart-test:jar:1.17-SNAPSHOT: Could not find > artifact org.apache.flink:flink-quickstart-scala:jar:1.17-SNAPSHOT in > apache.snapshots (https://repository.apache.org/snapshots) -> [Help 1] > Nov 13 02:10:37 [ERROR] > Nov 13 02:10:37 [ERROR] To see the full stack trace of the errors, re-run > Maven with the -e switch. > Nov 13 02:10:37 [ERROR] Re-run Maven using the -X switch to enable full debug > logging. > Nov 13 02:10:37 [ERROR] > Nov 13 02:10:37 [ERROR] For more information about the errors and possible > solutions, please read the following articles: > Nov 13 02:10:37 [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException > Nov 13 02:10:37 [ERROR] > Nov 13 02:10:37 [ERROR] After correcting the problems, you can resume the > build with the command > Nov 13 02:10:37 [ERROR] mvn -rf :flink-quickstart-test > Nov 13 02:10:38 Process exited with EXIT CODE: 1. > Nov 13 02:10:38 Trying to KILL watchdog (293). > /__w/1/s/tools/ci/watchdog.sh: line 100: 293 Terminated > watchdog > Nov 13 02:10:38 > == > Nov 13 02:10:38 Compilation failure detected, skipping test execution. 
> Nov 13 02:10:38 > == > {noformat} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43102&view=logs&j=298e20ef-7951-5965-0e79-ea664ddc435e&t=d4c90338-c843-57b0-3232-10ae74f00347&l=18363 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-30010) flink-quickstart-test failed due to could not resolve dependencies
[ https://issues.apache.org/jira/browse/FLINK-30010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler updated FLINK-30010: - Issue Type: Technical Debt (was: Bug) > flink-quickstart-test failed due to could not resolve dependencies > --- > > Key: FLINK-30010 > URL: https://issues.apache.org/jira/browse/FLINK-30010 > Project: Flink > Issue Type: Technical Debt > Components: Examples, Tests >Affects Versions: 1.17.0 >Reporter: Leonard Xu >Assignee: Chesnay Schepler >Priority: Major > Fix For: 1.17.0 > > > {noformat} > Nov 13 02:10:37 [ERROR] Failed to execute goal on project > flink-quickstart-test: Could not resolve dependencies for project > org.apache.flink:flink-quickstart-test:jar:1.17-SNAPSHOT: Could not find > artifact org.apache.flink:flink-quickstart-scala:jar:1.17-SNAPSHOT in > apache.snapshots (https://repository.apache.org/snapshots) -> [Help 1] > Nov 13 02:10:37 [ERROR] > Nov 13 02:10:37 [ERROR] To see the full stack trace of the errors, re-run > Maven with the -e switch. > Nov 13 02:10:37 [ERROR] Re-run Maven using the -X switch to enable full debug > logging. > Nov 13 02:10:37 [ERROR] > Nov 13 02:10:37 [ERROR] For more information about the errors and possible > solutions, please read the following articles: > Nov 13 02:10:37 [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException > Nov 13 02:10:37 [ERROR] > Nov 13 02:10:37 [ERROR] After correcting the problems, you can resume the > build with the command > Nov 13 02:10:37 [ERROR] mvn -rf :flink-quickstart-test > Nov 13 02:10:38 Process exited with EXIT CODE: 1. > Nov 13 02:10:38 Trying to KILL watchdog (293). > /__w/1/s/tools/ci/watchdog.sh: line 100: 293 Terminated > watchdog > Nov 13 02:10:38 > == > Nov 13 02:10:38 Compilation failure detected, skipping test execution. 
> Nov 13 02:10:38 > == > {noformat} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43102&view=logs&j=298e20ef-7951-5965-0e79-ea664ddc435e&t=d4c90338-c843-57b0-3232-10ae74f00347&l=18363 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (FLINK-30010) flink-quickstart-test failed due to could not resolve dependencies
[ https://issues.apache.org/jira/browse/FLINK-30010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler reassigned FLINK-30010: Assignee: Chesnay Schepler > flink-quickstart-test failed due to could not resolve dependencies > --- > > Key: FLINK-30010 > URL: https://issues.apache.org/jira/browse/FLINK-30010 > Project: Flink > Issue Type: Bug > Components: Examples, Tests >Affects Versions: 1.17.0 >Reporter: Leonard Xu >Assignee: Chesnay Schepler >Priority: Major > > {noformat} > Nov 13 02:10:37 [ERROR] Failed to execute goal on project > flink-quickstart-test: Could not resolve dependencies for project > org.apache.flink:flink-quickstart-test:jar:1.17-SNAPSHOT: Could not find > artifact org.apache.flink:flink-quickstart-scala:jar:1.17-SNAPSHOT in > apache.snapshots (https://repository.apache.org/snapshots) -> [Help 1] > Nov 13 02:10:37 [ERROR] > Nov 13 02:10:37 [ERROR] To see the full stack trace of the errors, re-run > Maven with the -e switch. > Nov 13 02:10:37 [ERROR] Re-run Maven using the -X switch to enable full debug > logging. > Nov 13 02:10:37 [ERROR] > Nov 13 02:10:37 [ERROR] For more information about the errors and possible > solutions, please read the following articles: > Nov 13 02:10:37 [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException > Nov 13 02:10:37 [ERROR] > Nov 13 02:10:37 [ERROR] After correcting the problems, you can resume the > build with the command > Nov 13 02:10:37 [ERROR] mvn -rf :flink-quickstart-test > Nov 13 02:10:38 Process exited with EXIT CODE: 1. > Nov 13 02:10:38 Trying to KILL watchdog (293). > /__w/1/s/tools/ci/watchdog.sh: line 100: 293 Terminated > watchdog > Nov 13 02:10:38 > == > Nov 13 02:10:38 Compilation failure detected, skipping test execution. 
> Nov 13 02:10:38 > == > {noformat} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43102&view=logs&j=298e20ef-7951-5965-0e79-ea664ddc435e&t=d4c90338-c843-57b0-3232-10ae74f00347&l=18363 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30013) Add data type compatibility check in SchemaChange.updateColumnType
Shammon created FLINK-30013: --- Summary: Add data type compatibility check in SchemaChange.updateColumnType Key: FLINK-30013 URL: https://issues.apache.org/jira/browse/FLINK-30013 Project: Flink Issue Type: Sub-task Components: Table Store Affects Versions: table-store-0.3.0 Reporter: Shammon Add a LogicalTypeCasts.supportsImplicitCast check to the operation in SchemaChange.updateColumnType, to avoid data type conversion failures when reading data -- This message was sent by Atlassian Jira (v8.20.10#820010)
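The proposed guard can be sketched as follows. This is a minimal, self-contained illustration only: Flink's `LogicalType` hierarchy and `LogicalTypeCasts.supportsImplicitCast` are replaced here by simple stand-ins, and the class and method names below are hypothetical, not the actual Table Store implementation.

```java
public class UpdateColumnTypeCheck {
    // Stand-in for Flink's LogicalType hierarchy (the real check operates on
    // org.apache.flink.table.types.logical.LogicalType instances).
    enum Type { INT, BIGINT, DOUBLE, STRING }

    // Stand-in for LogicalTypeCasts.supportsImplicitCast: here, only widening
    // numeric casts (and identity) are allowed; everything else is rejected.
    static boolean supportsImplicitCast(Type from, Type to) {
        if (from == to) {
            return true;
        }
        java.util.List<Type> widening =
                java.util.List.of(Type.INT, Type.BIGINT, Type.DOUBLE);
        int i = widening.indexOf(from);
        int j = widening.indexOf(to);
        return i >= 0 && j >= 0 && i < j;
    }

    // The guard proposed in FLINK-30013: refuse a column type update that
    // existing data files could not be read back with.
    static void updateColumnType(String column, Type oldType, Type newType) {
        if (!supportsImplicitCast(oldType, newType)) {
            throw new IllegalArgumentException(
                    "Column " + column + " cannot be updated from " + oldType
                            + " to " + newType + ": no implicit cast.");
        }
        // ... apply the schema change ...
    }

    public static void main(String[] args) {
        updateColumnType("id", Type.INT, Type.BIGINT); // widening: accepted
        boolean rejected = false;
        try {
            updateColumnType("id", Type.BIGINT, Type.INT); // narrowing
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        System.out.println(rejected ? "narrowing rejected" : "narrowing allowed");
    }
}
```

The point of the check is that widening updates (e.g. INT to BIGINT) are safe for previously written files, while narrowing or incompatible updates would fail only later, at read time, if accepted.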
[GitHub] [flink-table-store] houhang1005 closed pull request #377: [FLINK-30012]A typo in official Table Store document.
houhang1005 closed pull request #377: [FLINK-30012]A typo in official Table Store document. URL: https://github.com/apache/flink-table-store/pull/377 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-30012) A typo in official Table Store document.
[ https://issues.apache.org/jira/browse/FLINK-30012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-30012: --- Labels: pull-request-available (was: ) > A typo in official Table Store document. > > > Key: FLINK-30012 > URL: https://issues.apache.org/jira/browse/FLINK-30012 > Project: Flink > Issue Type: Improvement > Components: Table Store >Affects Versions: 1.16.0 > Environment: Flink 1.16.0 >Reporter: Hang HOU >Priority: Minor > Labels: pull-request-available > > Found a typo in Rescale Bucket document which is "exiting". > [Rescale > Bucket|https://nightlies.apache.org/flink/flink-table-store-docs-release-0.2/docs/development/rescale-bucket/#rescale-bucket] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink-table-store] houhang1005 opened a new pull request, #377: [FLINK-30012]A typo in official Table Store document.
houhang1005 opened a new pull request, #377: URL: https://github.com/apache/flink-table-store/pull/377 The word "exiting" in "Reorganize exiting data must be achieved by INSERT OVERWRITE." doesn't make sense; I realized it is most likely a typo for "existing". -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (FLINK-30012) A typo in official Table Store document.
Hang HOU created FLINK-30012: Summary: A typo in official Table Store document. Key: FLINK-30012 URL: https://issues.apache.org/jira/browse/FLINK-30012 Project: Flink Issue Type: Improvement Components: Table Store Affects Versions: 1.16.0 Environment: Flink 1.16.0 Reporter: Hang HOU Found a typo in Rescale Bucket document which is "exiting". [Rescale Bucket|https://nightlies.apache.org/flink/flink-table-store-docs-release-0.2/docs/development/rescale-bucket/#rescale-bucket] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink-table-store] zjureel commented on pull request #376: [FLINK-27843] Schema evolution for data file meta
zjureel commented on PR #376: URL: https://github.com/apache/flink-table-store/pull/376#issuecomment-1313180090 Hi @JingsongLi @tsreaper Can you help to review this PR when you're free? THX -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-29595) Add Estimator and Transformer for ChiSqSelector
[ https://issues.apache.org/jira/browse/FLINK-29595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-29595: --- Labels: pull-request-available (was: ) > Add Estimator and Transformer for ChiSqSelector > --- > > Key: FLINK-29595 > URL: https://issues.apache.org/jira/browse/FLINK-29595 > Project: Flink > Issue Type: New Feature > Components: Library / Machine Learning >Affects Versions: ml-2.2.0 >Reporter: Yunfeng Zhou >Priority: Major > Labels: pull-request-available > Fix For: ml-2.2.0 > > > Add the Estimator and Transformer for ChiSqSelector. > Its function would be at least equivalent to Spark's > org.apache.spark.ml.feature.ChiSqSelector. The relevant PR should contain the > following components: > * Java implementation/test (Must include) > * Python implementation/test (Optional) > * Markdown document (Optional) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink-ml] yunfengzhou-hub opened a new pull request, #173: [FLINK-29595] Add Estimator and Transformer for ChiSqSelector
yunfengzhou-hub opened a new pull request, #173: URL: https://github.com/apache/flink-ml/pull/173 ## What is the purpose of the change This PR adds the Estimator and Transformer for the Chi-square selector algorithm. ## Brief change log - Adds Transformer and Estimator implementation of Chi-square selector in Java and Python - Adds examples and documentation of Chi-square selector ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no) ## Documentation - Does this pull request introduce a new feature? (yes) - If yes, how is the feature documented? (docs / JavaDocs) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-29549) Add Aws Glue Catalog support in Flink
[ https://issues.apache.org/jira/browse/FLINK-29549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Samrat Deb updated FLINK-29549: --- Summary: Add Aws Glue Catalog support in Flink (was: Flink sql to add support of using AWS glue as metastore) > Add Aws Glue Catalog support in Flink > - > > Key: FLINK-29549 > URL: https://issues.apache.org/jira/browse/FLINK-29549 > Project: Flink > Issue Type: Improvement > Components: Connectors / Common, Connectors / Hive >Reporter: Samrat Deb >Priority: Major > > Currently, the Flink SQL Hive connector supports only a hardcoded Hive > metastore URI. > It would be good if Flink provided a configurable metastore (e.g. > AWS Glue). > This would help the many users of Flink who use AWS > Glue([https://docs.aws.amazon.com/glue/latest/dg/start-data-catalog.html]) as > their common (unified) catalog to process data. > cc [~prabhujoseph] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] link3280 commented on pull request #21292: [FLINK-28617][SQL Gateway] Support stop job statement in SqlGatewayService
link3280 commented on PR #21292: URL: https://github.com/apache/flink/pull/21292#issuecomment-1313164112 @flinkbot run azure -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-table-store] SteNicholas commented on a diff in pull request #357: [FLINK-29922] Support create external table for hive catalog
SteNicholas commented on code in PR #357: URL: https://github.com/apache/flink-table-store/pull/357#discussion_r1021116470 ## flink-table-store-core/src/main/java/org/apache/flink/table/store/table/TableType.java: ## @@ -0,0 +1,48 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.store.table; + +import org.apache.flink.configuration.DescribedEnum; +import org.apache.flink.configuration.description.InlineElement; + +import static org.apache.flink.configuration.description.TextElement.text; + +/** Enum of catalog table type. */ +public enum TableType implements DescribedEnum { +MANAGED("MANAGED_TABLE", "Hive manage the lifecycle of the table."), +EXTERNAL("EXTERNAL_TABLE", "Files are already present or in remote locations."); Review Comment: ```suggestion EXTERNAL("EXTERNAL_TABLE", "The table where Table Store has loose coupling with the data stored in external locations."); ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-table-store] SteNicholas commented on a diff in pull request #357: [FLINK-29922] Support create external table for hive catalog
SteNicholas commented on code in PR #357: URL: https://github.com/apache/flink-table-store/pull/357#discussion_r1021115008 ## flink-table-store-core/src/main/java/org/apache/flink/table/store/table/TableType.java: ## @@ -0,0 +1,48 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.store.table; + +import org.apache.flink.configuration.DescribedEnum; +import org.apache.flink.configuration.description.InlineElement; + +import static org.apache.flink.configuration.description.TextElement.text; + +/** Enum of catalog table type. */ +public enum TableType implements DescribedEnum { +MANAGED("MANAGED_TABLE", "Hive manage the lifecycle of the table."), Review Comment: ```suggestion MANAGED("MANAGED_TABLE", "Table Store owned table where the entire lifecycle of the table data is managed."), ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-table-store] SteNicholas commented on a diff in pull request #357: [FLINK-29922] Support create external table for hive catalog
SteNicholas commented on code in PR #357: URL: https://github.com/apache/flink-table-store/pull/357#discussion_r1021115008 ## flink-table-store-core/src/main/java/org/apache/flink/table/store/table/TableType.java: ## @@ -0,0 +1,48 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.store.table; + +import org.apache.flink.configuration.DescribedEnum; +import org.apache.flink.configuration.description.InlineElement; + +import static org.apache.flink.configuration.description.TextElement.text; + +/** Enum of catalog table type. */ +public enum TableType implements DescribedEnum { +MANAGED("MANAGED_TABLE", "Hive manage the lifecycle of the table."), Review Comment: ```suggestion MANAGED("MANAGED_TABLE", "Table Store owns the table where the entire lifecycle of the table data is managed."), ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] SmirAlex commented on pull request #20919: [FLINK-29405] Fix unstable test InputFormatCacheLoaderTest
SmirAlex commented on PR #20919: URL: https://github.com/apache/flink/pull/20919#issuecomment-1313157507 > Waiting forever in production code is super sketchy and should virtually never be done. > > The PR is also lacking a sort of problem analysis and explanation for how this fixes the issue. Hi @zentol, I added timeout on wait after interrupt and updated PR description to explain the problem and proposed solution more precisely. Can you check the latest commit, please? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] 1996fanrui commented on pull request #21303: [FLINK-30002][checkpoint] Change the alignmentTimeout to alignedCheckpointTimeout
1996fanrui commented on PR #21303: URL: https://github.com/apache/flink/pull/21303#issuecomment-1313153287 Hi @pnowojski , please help take a look in your free time, thanks~ -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] 1996fanrui commented on pull request #21193: [hotfix] Add the final and fix typo
1996fanrui commented on PR #21193: URL: https://github.com/apache/flink/pull/21193#issuecomment-1313152013 Hi @zentol , please help take a look in your free time, thanks~ -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] 1996fanrui commented on pull request #21304: [FLINK-30003][rpc] Wait the scheduler future is done before check
1996fanrui commented on PR #21304: URL: https://github.com/apache/flink/pull/21304#issuecomment-1313151567 Hi @zentol , it's caused by FLINK-29249, please help take a look in your free time, thanks~ -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] leonardBang commented on pull request #21308: [hotfix][docs][table] Fix versioned table example
leonardBang commented on PR #21308: URL: https://github.com/apache/flink/pull/21308#issuecomment-1313149921 Thanks @lincoln-lil for the contribution. Could you backport the fix to the `release-1.15` and `release-1.16` branches? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] leonardBang merged pull request #21308: [hotfix][docs][table] Fix versioned table example
leonardBang merged PR #21308: URL: https://github.com/apache/flink/pull/21308 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-29913) Shared state would be discarded by mistake when maxConcurrentCheckpoint>1
[ https://issues.apache.org/jira/browse/FLINK-29913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633506#comment-17633506 ] Congxian Qiu commented on FLINK-29913: -- sorry for the late reply. [~Yanfei Lei] for the priority, IMHO, if the user sets `maxConcurrentCheckpoint > 1 && MAX_RETAINED_CHECKPOINTS > 1`, then the checkpoints may be broken and cannot be restored from because of the `FileNotFoundException`, so I think it deserves an escalated priority. [~roman] your proposal seems valid from my perspective; maybe changing the logic for generating the registry key (perhaps using the filename in the remote filesystem) is enough to solve the problem here? Please let me know what you think about this, thanks. > Shared state would be discarded by mistake when maxConcurrentCheckpoint>1 > - > > Key: FLINK-29913 > URL: https://issues.apache.org/jira/browse/FLINK-29913 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.15.0, 1.16.0 >Reporter: Yanfei Lei >Priority: Minor > > When maxConcurrentCheckpoint>1, the shared state of the incremental RocksDB state > backend would be discarded by registering the same name handle. See > [https://github.com/apache/flink/pull/21050#discussion_r1011061072] > cc [~roman] -- This message was sent by Atlassian Jira (v8.20.10#820010)
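The failure mode under discussion can be illustrated with a toy registry. This is an illustrative model only, not Flink's actual `SharedStateRegistry`: the class below and its paths are hypothetical, and it exists solely to show why keying shared state by a handle name that collides across concurrent checkpoints can discard a file a retained checkpoint still needs, while keying by the remote file path avoids the collision.

```java
import java.util.HashMap;
import java.util.Map;

public class ToyStateRegistry {
    // key -> remote file path currently registered under that key
    private final Map<String, String> registered = new HashMap<>();

    // Registering a *different* file under an already-used key replaces the
    // old entry; the toy model treats the replaced file as deleted, which is
    // exactly the mistaken discard described in the thread. Returns the path
    // of the discarded file, or null if nothing was discarded.
    String register(String key, String filePath) {
        String previous = registered.put(key, filePath);
        if (previous != null && !previous.equals(filePath)) {
            return previous; // simulated deletion of a still-needed file
        }
        return null;
    }

    public static void main(String[] args) {
        // Keyed by handle name: two concurrent checkpoints upload different
        // files that share the name "sst-001", so the first one is discarded.
        ToyStateRegistry byName = new ToyStateRegistry();
        byName.register("sst-001", "/remote/chk-1/sst-001-uuidA");
        String discarded = byName.register("sst-001", "/remote/chk-2/sst-001-uuidB");
        System.out.println("discarded: " + discarded);

        // Keyed by remote file path (the proposal in the thread): the keys
        // are unique per uploaded file, so both entries survive.
        ToyStateRegistry byPath = new ToyStateRegistry();
        byPath.register("/remote/chk-1/sst-001-uuidA", "/remote/chk-1/sst-001-uuidA");
        String discarded2 =
                byPath.register("/remote/chk-2/sst-001-uuidB", "/remote/chk-2/sst-001-uuidB");
        System.out.println("discarded: " + discarded2); // null -- no collision
    }
}
```

With maxConcurrentCheckpoint = 1 the name collision cannot occur, which is why the bug only surfaces once concurrent checkpoints (and more than one retained checkpoint) are enabled.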
[GitHub] [flink-table-store] wxplovecc commented on a diff in pull request #357: [FLINK-29922] Support create external table for hive catalog
wxplovecc commented on code in PR #357: URL: https://github.com/apache/flink-table-store/pull/357#discussion_r1021091587 ## flink-table-store-hive/flink-table-store-hive-catalog/src/main/java/org/apache/flink/table/store/hive/HiveCatalog.java: ## @@ -226,6 +227,13 @@ public void createTable(ObjectPath tablePath, UpdateSchema updateSchema, boolean e); } Table table = newHmsTable(tablePath); + +if (hiveConf.getEnum(TABLE_TYPE.key(), TableType.MANAGED_TABLE) Review Comment: done ## flink-table-store-core/src/main/java/org/apache/flink/table/store/table/TableType.java: ## @@ -0,0 +1,25 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.store.table; + +/** Enum of catalog table type. */ +public enum TableType { +MANAGED_TABLE, Review Comment: updated -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] lincoln-lil commented on pull request #20745: [FLINK-28988] Don't push above filters down into the right table for temporal join
lincoln-lil commented on PR #20745: URL: https://github.com/apache/flink/pull/20745#issuecomment-1313117021 @shuiqiangchen great! Please ping me here once you've fixed the tests and I can help review this pr before merging.
[GitHub] [flink] lincoln-lil commented on pull request #21308: [hotfix][docs][table] Fix versioned table example
lincoln-lil commented on PR #21308: URL: https://github.com/apache/flink/pull/21308#issuecomment-1313114452 @leonardBang thanks for reviewing this! I've updated the description for the change.
[GitHub] [flink] lincoln-lil commented on a diff in pull request #21308: [hotfix][docs][table] Fix versioned table example
lincoln-lil commented on code in PR #21308: URL: https://github.com/apache/flink/pull/21308#discussion_r1021084345 ## docs/content/docs/dev/table/concepts/versioned_tables.md: ## @@ -152,9 +153,9 @@ WHERE rownum = 1; +(INSERT)09:00:00 Yen102 +(INSERT)09:00:00 Euro 114 +(INSERT)09:00:00 USD1 -+(UPDATE_AFTER) 10:45:00 Euro 116 +(UPDATE_AFTER) 11:15:00 Euro 119 -+(INSERT)11:49:00 Pounds 108 ++(INSERT)11:45:00 Pounds 107 ++(UPDATE_AFTER) 11:49:00 Pounds 108 Review Comment: it's better to add more 'update_after' lines (not just one) for better understanding
[GitHub] [flink] houhang1005 commented on pull request #21268: [FLINK-29952][table-api]Append the detail of the exception when drop tamporary table.
houhang1005 commented on PR #21268: URL: https://github.com/apache/flink/pull/21268#issuecomment-1313110569 @flinkbot run azure
[GitHub] [flink] link3280 commented on pull request #21292: [FLINK-28617][SQL Gateway] Support stop job statement in SqlGatewayService
link3280 commented on PR #21292: URL: https://github.com/apache/flink/pull/21292#issuecomment-1313103656 @flinkbot run azure
[GitHub] [flink-kubernetes-operator] rgsriram commented on a diff in pull request #437: [FLINK-29609] Shut down JM for terminated applications after configured duration
rgsriram commented on code in PR #437: URL: https://github.com/apache/flink-kubernetes-operator/pull/437#discussion_r1020282292 ## flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/reconciler/deployment/ApplicationReconciler.java: ## @@ -318,6 +319,31 @@ private boolean shouldRestartJobBecauseUnhealthy( return restartNeeded; } +private boolean cleanupTerminalJmAfterTtl( +FlinkDeployment deployment, Configuration observeConfig) { +var status = deployment.getStatus(); +boolean terminal = ReconciliationUtils.isJobInTerminalState(status); +boolean jmStillRunning = +status.getJobManagerDeploymentStatus() != JobManagerDeploymentStatus.MISSING; + +if (terminal && jmStillRunning) { +var ttl = observeConfig.get(KubernetesOperatorConfigOptions.OPERATOR_JM_SHUTDOWN_TTL); +boolean ttlPassed = +Instant.now() +.isAfter( +Instant.ofEpochMilli( +Long.parseLong( + status.getJobStatus().getUpdateTime())) +.plus(ttl)); +if (ttlPassed) { +LOG.info("Removing JobManager deployment for terminal application."); +flinkService.deleteClusterDeployment(deployment.getMetadata(), status, false); Review Comment: Should we not delete the HA metadata as well?
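The TTL check in the diff above compares the job's recorded update time plus a configured TTL against the current time. A minimal self-contained sketch of just that comparison (the Flink operator classes are omitted; the class and method names here are illustrative, not the operator's API):

```java
import java.time.Duration;
import java.time.Instant;

public class JmTtlCheck {
    /** True once `ttl` has elapsed since the job's recorded update time
     *  (epoch millis), mirroring the Instant comparison in the diff. */
    static boolean ttlPassed(long updateTimeMillis, Duration ttl, Instant now) {
        return now.isAfter(Instant.ofEpochMilli(updateTimeMillis).plus(ttl));
    }

    public static void main(String[] args) {
        Instant now = Instant.ofEpochMilli(60_000);
        // Job reached a terminal state at t=0 with a 30s TTL: cleanup is due.
        System.out.println(ttlPassed(0, Duration.ofSeconds(30), now));      // prints true
        // Job reached a terminal state at t=50s: the TTL has not elapsed yet.
        System.out.println(ttlPassed(50_000, Duration.ofSeconds(30), now)); // prints false
    }
}
```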
[jira] [Commented] (FLINK-30001) sql-client.sh start failed
[ https://issues.apache.org/jira/browse/FLINK-30001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633497#comment-17633497 ] xiaohang.li commented on FLINK-30001: - After investigation: by default, Flink's org.apache.flink.table.planner.loader.PlannerModule uses the /tmp directory as its temporary working path, so it calls Java's java.nio.file.Files class to create that directory. However, if /tmp is a symbolic link pointing to /mnt/tmp, java.nio.file.Files cannot handle this, which leads to the reported error. A temporary-path setting needs to be added in sql-client.sh: export JVM_ARGS="-Djava.io.tmpdir=/mnt/tmp" > sql-client.sh start failed > -- > > Key: FLINK-30001 > URL: https://issues.apache.org/jira/browse/FLINK-30001 > Project: Flink > Issue Type: Bug > Components: Command Line Client >Affects Versions: 1.16.0, 1.15.2 >Reporter: xiaohang.li >Priority: Major > > [hadoop@master flink-1.15.0]$ ./bin/sql-client.sh > Setting HADOOP_CONF_DIR=/etc/hadoop/conf because no HADOOP_CONF_DIR or > HADOOP_CLASSPATH was set. > Setting HBASE_CONF_DIR=/etc/hbase/conf because no HBASE_CONF_DIR was set. > Exception in thread "main" org.apache.flink.table.client.SqlClientException: > Unexpected exception. This is a bug. Please consider filing an issue. > at > org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:201) > at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161) > Caused by: org.apache.flink.table.api.TableException: Could not instantiate > the executor.
Make sure a planner module is on the classpath > at > org.apache.flink.table.client.gateway.context.ExecutionContext.lookupExecutor(ExecutionContext.java:163) > at > org.apache.flink.table.client.gateway.context.ExecutionContext.createTableEnvironment(ExecutionContext.java:111) > at > org.apache.flink.table.client.gateway.context.ExecutionContext.(ExecutionContext.java:66) > at > org.apache.flink.table.client.gateway.context.SessionContext.create(SessionContext.java:247) > at > org.apache.flink.table.client.gateway.local.LocalContextUtils.buildSessionContext(LocalContextUtils.java:87) > at > org.apache.flink.table.client.gateway.local.LocalExecutor.openSession(LocalExecutor.java:87) > at org.apache.flink.table.client.SqlClient.start(SqlClient.java:88) > at > org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:187) > ... 1 more > Caused by: org.apache.flink.table.api.TableException: Unexpected error when > trying to load service provider for factories. > at > org.apache.flink.table.factories.FactoryUtil.lambda$discoverFactories$19(FactoryUtil.java:813) > at java.util.ArrayList.forEach(ArrayList.java:1259) > at > org.apache.flink.table.factories.FactoryUtil.discoverFactories(FactoryUtil.java:799) > at > org.apache.flink.table.factories.FactoryUtil.discoverFactory(FactoryUtil.java:517) > at > org.apache.flink.table.client.gateway.context.ExecutionContext.lookupExecutor(ExecutionContext.java:154) > ...
8 more > Caused by: java.util.ServiceConfigurationError: > org.apache.flink.table.factories.Factory: Provider > org.apache.flink.table.planner.loader.DelegateExecutorFactory could not be > instantiated > at java.util.ServiceLoader.fail(ServiceLoader.java:232) > at java.util.ServiceLoader.access$100(ServiceLoader.java:185) > at > java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384) > at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404) > at java.util.ServiceLoader$1.next(ServiceLoader.java:480) > at > org.apache.flink.table.factories.ServiceLoaderUtil.load(ServiceLoaderUtil.java:42) > at > org.apache.flink.table.factories.FactoryUtil.discoverFactories(FactoryUtil.java:798) > ... 10 more > Caused by: java.lang.ExceptionInInitializerError > at > org.apache.flink.table.planner.loader.PlannerModule.getInstance(PlannerModule.java:135) > at > org.apache.flink.table.planner.loader.DelegateExecutorFactory.(DelegateExecutorFactory.java:34) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native > Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at java.lang.Class.newInstance(Class.java:442) > at > java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380) > ... 14 more > Caused by: org.apache.flink.table.api.TableException: Could not initialize > the table planner components loader. > at > org.apache.flink.table.planner.loader.PlannerModule.(PlannerModule.java:123) > at > org.apache.flink.tab
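The failure mode discussed in FLINK-30001 above boils down to the planner loader creating a working directory under `java.io.tmpdir`, which the suggested `JVM_ARGS` override redirects. A hedged sketch of that step (the `flink-table-planner_` prefix and helper name are assumptions for illustration, not Flink's exact internals):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class TmpDirCheck {
    /** Try to create a planner-style working directory under java.io.tmpdir,
     *  roughly what PlannerModule does when extracting its jars. */
    static boolean canCreateTempDir() {
        try {
            Path tmp = Paths.get(System.getProperty("java.io.tmpdir"));
            Path workDir = Files.createTempDirectory(tmp, "flink-table-planner_");
            Files.delete(workDir); // clean up the probe directory
            return true;
        } catch (IOException e) {
            // This is where the reported startup failure would surface.
            return false;
        }
    }

    public static void main(String[] args) {
        // sql-client.sh can redirect the temp dir before launch, per the comment:
        //   export JVM_ARGS="-Djava.io.tmpdir=/mnt/tmp"
        System.out.println(System.getProperty("java.io.tmpdir"));
        System.out.println(canCreateTempDir()); // prints true on a healthy setup
    }
}
```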
[jira] [Updated] (FLINK-29688) Build time compatibility check for DynamoDB SDK
[ https://issues.apache.org/jira/browse/FLINK-29688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-29688: --- Labels: pull-request-available (was: ) > Build time compatibility check for DynamoDB SDK > --- > > Key: FLINK-29688 > URL: https://issues.apache.org/jira/browse/FLINK-29688 > Project: Flink > Issue Type: Improvement > Components: Connectors / DynamoDB >Reporter: Danny Cranmer >Priority: Major > Labels: pull-request-available > Fix For: aws-connector-2.0.0 > > > The DynamoDB connector exposes SDK classes to the end user code, and also is > responsible for de/serialization of these classes. Add a build time check to > ensure the client model is binary equivalent of a known good version. This > will prevent us updating the SDK and unexpectedly breaking the > de/serialization. > We use {{japicmp-maven-plugin}} to do something similar for Flink, we can > potentially reuse this. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink-connector-aws] dannycranmer opened a new pull request, #22: [FLINK-29688] Add test to detect changes in DynamoDB model
dannycranmer opened a new pull request, #22: URL: https://github.com/apache/flink-connector-aws/pull/22 ## What is the purpose of the change Add test to detect changes in DynamoDB model ## Brief change log * Add test to detect changes in DynamoDB model ## Verifying this change Tests pass ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): no - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: no - The serializers: no - The runtime per-record code paths (performance sensitive): no - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no - The S3 file system connector: no ## Documentation - Does this pull request introduce a new feature? no - If yes, how is the feature documented? n/a
[GitHub] [flink] leonardBang commented on a diff in pull request #21308: [hotfix][docs][table] Fix versioned table example
leonardBang commented on code in PR #21308: URL: https://github.com/apache/flink/pull/21308#discussion_r1021064692 ## docs/content/docs/dev/table/concepts/versioned_tables.md: ## @@ -152,9 +153,9 @@ WHERE rownum = 1; +(INSERT)09:00:00 Yen102 +(INSERT)09:00:00 Euro 114 +(INSERT)09:00:00 USD1 -+(UPDATE_AFTER) 10:45:00 Euro 116 +(UPDATE_AFTER) 11:15:00 Euro 119 -+(INSERT)11:49:00 Pounds 108 ++(INSERT)11:45:00 Pounds 107 ++(UPDATE_AFTER) 11:49:00 Pounds 108 Review Comment: Why do we need the change here?
[jira] [Resolved] (FLINK-29962) Exclude Jamon 2.3.1
[ https://issues.apache.org/jira/browse/FLINK-29962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jamie Grier resolved FLINK-29962. - Resolution: Fixed > Exclude Jamon 2.3.1 > --- > > Key: FLINK-29962 > URL: https://issues.apache.org/jira/browse/FLINK-29962 > Project: Flink > Issue Type: Improvement > Components: Connectors / Hive, Table SQL / Gateway >Reporter: John Roesler >Assignee: John Roesler >Priority: Minor > Labels: pull-request-available > Fix For: 1.17.0 > > > Hi all, > My Maven mirror is complaining that the pom for jamon-runtime:2.3.1 has a > malformed pom. It looks like it's fixed in jamon-runtime:2.4.1. According to > dependency:tree, Flink already has transitive dependencies on both versions, > so I'm proposing to just exclude the transitive dependency from the > problematic direct dependencies and pin the dependency to 2.4.1. > I'll send a PR shortly.
[jira] [Updated] (FLINK-29962) Exclude Jamon 2.3.1
[ https://issues.apache.org/jira/browse/FLINK-29962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jamie Grier updated FLINK-29962: Fix Version/s: 1.17.0 > Exclude Jamon 2.3.1 > --- > > Key: FLINK-29962 > URL: https://issues.apache.org/jira/browse/FLINK-29962 > Project: Flink > Issue Type: Improvement > Components: Connectors / Hive, Table SQL / Gateway >Reporter: John Roesler >Assignee: John Roesler >Priority: Minor > Labels: pull-request-available > Fix For: 1.17.0 > > > Hi all, > My Maven mirror is complaining that the pom for jamon-runtime:2.3.1 has a > malformed pom. It looks like it's fixed in jamon-runtime:2.4.1. According to > dependency:tree, Flink already has transitive dependencies on both versions, > so I'm proposing to just exclude the transitive dependency from the > problematic direct dependencies and pin the dependency to 2.4.1. > I'll send a PR shortly.
[jira] [Commented] (FLINK-29962) Exclude Jamon 2.3.1
[ https://issues.apache.org/jira/browse/FLINK-29962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633495#comment-17633495 ] Jamie Grier commented on FLINK-29962: - Merged in Flink master: 9572cf6b287d71ee9c307546d8cd8f8898137bdd > Exclude Jamon 2.3.1 > --- > > Key: FLINK-29962 > URL: https://issues.apache.org/jira/browse/FLINK-29962 > Project: Flink > Issue Type: Improvement > Components: Connectors / Hive, Table SQL / Gateway >Reporter: John Roesler >Assignee: John Roesler >Priority: Minor > Labels: pull-request-available > > Hi all, > My Maven mirror is complaining that the pom for jamon-runtime:2.3.1 has a > malformed pom. It looks like it's fixed in jamon-runtime:2.4.1. According to > dependency:tree, Flink already has transitive dependencies on both versions, > so I'm proposing to just exclude the transitive dependency from the > problematic direct dependencies and pin the dependency to 2.4.1. > I'll send a PR shortly.
[GitHub] [flink-connector-aws] dannycranmer closed pull request #20: [FLINK-29444][Connectors/AWS] Syncing parent pom to elasticsearch in prep for release
dannycranmer closed pull request #20: [FLINK-29444][Connectors/AWS] Syncing parent pom to elasticsearch in prep for release URL: https://github.com/apache/flink-connector-aws/pull/20
[GitHub] [flink-connector-aws] dannycranmer commented on pull request #20: [FLINK-29444][Connectors/AWS] Syncing parent pom to elasticsearch in prep for release
dannycranmer commented on PR #20: URL: https://github.com/apache/flink-connector-aws/pull/20#issuecomment-1313066278 Superseded by https://github.com/apache/flink-connector-aws/pull/21
[jira] [Commented] (FLINK-30011) HiveCatalogGenericMetadataTest azure CI failed due to catalog does not exist
[ https://issues.apache.org/jira/browse/FLINK-30011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633492#comment-17633492 ] Leonard Xu commented on FLINK-30011: [~luoyuxia] Could you take a look at this ticket? > HiveCatalogGenericMetadataTest azure CI failed due to catalog does not exist > > > Key: FLINK-30011 > URL: https://issues.apache.org/jira/browse/FLINK-30011 > Project: Flink > Issue Type: Bug > Components: Connectors / Hive >Affects Versions: 1.16.1 >Reporter: Leonard Xu >Priority: Major > > {noformat} > Nov 13 01:55:18 [ERROR] > HiveCatalogHiveMetadataTest>CatalogTest.testGetPartitionStats:1212 » Catalog > F... > Nov 13 01:55:18 [ERROR] > HiveCatalogHiveMetadataTest>CatalogTest.testGetPartition_PartitionNotExist:1160 > » Catalog > Nov 13 01:55:18 [ERROR] > HiveCatalogHiveMetadataTest>CatalogTest.testGetPartition_PartitionSpecInvalid_invalidPartitionSpec:1124 > » Catalog > Nov 13 01:55:18 [ERROR] > HiveCatalogHiveMetadataTest>CatalogTest.testGetPartition_PartitionSpecInvalid_sizeNotEqual:1139 > » Catalog > Nov 13 01:55:18 [ERROR] > HiveCatalogHiveMetadataTest>CatalogTest.testGetPartition_TableNotPartitioned:1110 > » Catalog > Nov 13 01:55:18 [ERROR] > HiveCatalogHiveMetadataTest>CatalogTest.testGetTableStats_TableNotExistException:1201 > » Catalog > Nov 13 01:55:18 [ERROR] > HiveCatalogHiveMetadataTest>CatalogTest.testGetTable_TableNotExistException:323 > » Catalog > Nov 13 01:55:18 [ERROR] HiveCatalogHiveMetadataTest.testHiveStatistics:251 > » Catalog Failed to create ... > Nov 13 01:55:18 [ERROR] > HiveCatalogHiveMetadataTest>CatalogTest.testListFunctions:749 » Catalog > Failed... > Nov 13 01:55:18 [ERROR] > HiveCatalogHiveMetadataTest>CatalogTest.testListPartitionPartialSpec:1188 » > Catalog > Nov 13 01:55:18 [ERROR] > HiveCatalogHiveMetadataTest>CatalogTest.testListTables:498 » Catalog Failed > to... > Nov 13 01:55:18 [ERROR] > HiveCatalogHiveMetadataTest>CatalogTest.testListView:620 » Catalog Failed to > c...
> Nov 13 01:55:18 [ERROR] > HiveCatalogHiveMetadataTest>CatalogTest.testPartitionExists:1174 » Catalog > Fai... > Nov 13 01:55:18 [ERROR] > HiveCatalogHiveMetadataTest>CatalogTest.testRenameTable_TableAlreadyExistException:483 > » Catalog > Nov 13 01:55:18 [ERROR] > HiveCatalogHiveMetadataTest>CatalogTest.testRenameTable_TableNotExistException:465 > » Catalog > Nov 13 01:55:18 [ERROR] > HiveCatalogHiveMetadataTest>CatalogTest.testRenameTable_TableNotExistException_ignored:477 > » Catalog > Nov 13 01:55:18 [ERROR] > HiveCatalogHiveMetadataTest>CatalogTest.testRenameTable_nonPartitionedTable:451 > » Catalog > Nov 13 01:55:18 [ERROR] > HiveCatalogHiveMetadataTest>CatalogTest.testRenameView:637 » Catalog Failed > to... > Nov 13 01:55:18 [ERROR] > HiveCatalogHiveMetadataTest>CatalogTest.testTableExists:510 » Catalog Failed > t... > Nov 13 01:55:18 [ERROR] > HiveCatalogHiveMetadataTest.testViewCompatibility:115 » Catalog Failed to > crea... > Nov 13 01:55:18 [INFO] > Nov 13 01:55:18 [ERROR] Tests run: 361, Failures: 0, Errors: 132, Skipped: 0 > {noformat} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43104&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=d04c9862-880c-52f5-574b-a7a79fef8e0f
[jira] [Created] (FLINK-30011) HiveCatalogGenericMetadataTest azure CI failed due to catalog does not exist
Leonard Xu created FLINK-30011: -- Summary: HiveCatalogGenericMetadataTest azure CI failed due to catalog does not exist Key: FLINK-30011 URL: https://issues.apache.org/jira/browse/FLINK-30011 Project: Flink Issue Type: Bug Components: Connectors / Hive Affects Versions: 1.16.1 Reporter: Leonard Xu {noformat} Nov 13 01:55:18 [ERROR] HiveCatalogHiveMetadataTest>CatalogTest.testGetPartitionStats:1212 » Catalog F... Nov 13 01:55:18 [ERROR] HiveCatalogHiveMetadataTest>CatalogTest.testGetPartition_PartitionNotExist:1160 » Catalog Nov 13 01:55:18 [ERROR] HiveCatalogHiveMetadataTest>CatalogTest.testGetPartition_PartitionSpecInvalid_invalidPartitionSpec:1124 » Catalog Nov 13 01:55:18 [ERROR] HiveCatalogHiveMetadataTest>CatalogTest.testGetPartition_PartitionSpecInvalid_sizeNotEqual:1139 » Catalog Nov 13 01:55:18 [ERROR] HiveCatalogHiveMetadataTest>CatalogTest.testGetPartition_TableNotPartitioned:1110 » Catalog Nov 13 01:55:18 [ERROR] HiveCatalogHiveMetadataTest>CatalogTest.testGetTableStats_TableNotExistException:1201 » Catalog Nov 13 01:55:18 [ERROR] HiveCatalogHiveMetadataTest>CatalogTest.testGetTable_TableNotExistException:323 » Catalog Nov 13 01:55:18 [ERROR] HiveCatalogHiveMetadataTest.testHiveStatistics:251 » Catalog Failed to create ... Nov 13 01:55:18 [ERROR] HiveCatalogHiveMetadataTest>CatalogTest.testListFunctions:749 » Catalog Failed... Nov 13 01:55:18 [ERROR] HiveCatalogHiveMetadataTest>CatalogTest.testListPartitionPartialSpec:1188 » Catalog Nov 13 01:55:18 [ERROR] HiveCatalogHiveMetadataTest>CatalogTest.testListTables:498 » Catalog Failed to... Nov 13 01:55:18 [ERROR] HiveCatalogHiveMetadataTest>CatalogTest.testListView:620 » Catalog Failed to c... Nov 13 01:55:18 [ERROR] HiveCatalogHiveMetadataTest>CatalogTest.testPartitionExists:1174 » Catalog Fai...
Nov 13 01:55:18 [ERROR] HiveCatalogHiveMetadataTest>CatalogTest.testRenameTable_TableAlreadyExistException:483 » Catalog Nov 13 01:55:18 [ERROR] HiveCatalogHiveMetadataTest>CatalogTest.testRenameTable_TableNotExistException:465 » Catalog Nov 13 01:55:18 [ERROR] HiveCatalogHiveMetadataTest>CatalogTest.testRenameTable_TableNotExistException_ignored:477 » Catalog Nov 13 01:55:18 [ERROR] HiveCatalogHiveMetadataTest>CatalogTest.testRenameTable_nonPartitionedTable:451 » Catalog Nov 13 01:55:18 [ERROR] HiveCatalogHiveMetadataTest>CatalogTest.testRenameView:637 » Catalog Failed to... Nov 13 01:55:18 [ERROR] HiveCatalogHiveMetadataTest>CatalogTest.testTableExists:510 » Catalog Failed t... Nov 13 01:55:18 [ERROR] HiveCatalogHiveMetadataTest.testViewCompatibility:115 » Catalog Failed to crea... Nov 13 01:55:18 [INFO] Nov 13 01:55:18 [ERROR] Tests run: 361, Failures: 0, Errors: 132, Skipped: 0 {noformat} https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43104&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=d04c9862-880c-52f5-574b-a7a79fef8e0f
[jira] [Commented] (FLINK-28394) Python py36-cython: InvocationError for command install_command.sh fails with exit code 1
[ https://issues.apache.org/jira/browse/FLINK-28394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633491#comment-17633491 ] Leonard Xu commented on FLINK-28394: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43104&view=logs&j=e92ecf6d-e207-5a42-7ff7-528ff0c5b259&t=40fc352e-9b4c-5fd8-363f-628f24b01ec2 > Python py36-cython: InvocationError for command install_command.sh fails with > exit code 1 > - > > Key: FLINK-28394 > URL: https://issues.apache.org/jira/browse/FLINK-28394 > Project: Flink > Issue Type: Bug > Components: API / Python >Affects Versions: 1.16.0, 1.15.3 >Reporter: Martijn Visser >Assignee: Huang Xingbo >Priority: Major > Labels: stale-assigned, test-stability > > {code:java} > Jul 05 03:47:22 Picked up JAVA_TOOL_OPTIONS: -XX:+HeapDumpOnOutOfMemoryError > Jul 05 03:47:32 Using Python version 3.8.13 (default, Mar 28 2022 11:38:47) > Jul 05 03:47:32 pip_test_code.py success! > Jul 05 03:47:32 py38-cython finish: run-test after 1658.14 seconds > Jul 05 03:47:32 py38-cython start: run-test-post > Jul 05 03:47:32 py38-cython finish: run-test-post after 0.00 seconds > Jul 05 03:47:32 ___ summary > > Jul 05 03:47:32 ERROR: py36-cython: InvocationError for command > /__w/3/s/flink-python/dev/install_command.sh --exists-action w > .tox/.tmp/package/1/apache-flink-1.15.dev0.zip (exited with code 1) > Jul 05 03:47:32 py37-cython: commands succeeded > Jul 05 03:47:32 py38-cython: commands succeeded > Jul 05 03:47:32 cleanup > /__w/3/s/flink-python/.tox/.tmp/package/1/apache-flink-1.15.dev0.zip > Jul 05 03:47:33 tox checks... [FAILED] > Jul 05 03:47:33 Process exited with EXIT CODE: 1. > {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=37604&view=logs&j=bf5e383b-9fd3-5f02-ca1c-8f788e2e76d3&t=85189c57-d8a0-5c9c-b61d-fc05cfac62cf&l=27789
[jira] [Closed] (FLINK-26827) Error when integrating Flink SQL with Hive
[ https://issues.apache.org/jira/browse/FLINK-26827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leonard Xu closed FLINK-26827. -- Resolution: Invalid Please update this title to English, feel free to reopen once updated. > Error when integrating Flink SQL with Hive > - > > Key: FLINK-26827 > URL: https://issues.apache.org/jira/browse/FLINK-26827 > Project: Flink > Issue Type: Bug > Components: Table SQL / API >Affects Versions: 1.13.3 > Environment: CDH 6.2.1, Linux, JDK 1.8 >Reporter: zhushifeng >Priority: Major > Attachments: image-2022-03-24-09-33-31-786.png > > > Topic : FlinkSQL combine with Hive > > *step1:* > environment: > HIVE2.1 > Flink1.13.3 > FlinkCDC2.1 > CDH6.2.1 > > *step2:* > when I do the following thing I come across some problems. For example, > copy the following jar to /flink-1.13.3/lib/ > // Flink's Hive connector > flink-connector-hive_2.11-1.13.3.jar > // Hive dependencies > hive-exec-2.1.0.jar. == hive-exec-2.1.1-cdh6.2.1.jar > // add antlr-runtime if you need to use hive dialect > antlr-runtime-3.5.2.jar > !image-2022-03-24-09-33-31-786.png! > > *step3:* restart the Flink Cluster > # ./start-cluster.sh > # Starting cluster. > # Starting standalonesession daemon on host xuehai-cm. > # Starting taskexecutor daemon on host xuehai-cm. > # Starting taskexecutor daemon on host xuehai-nn. > # Starting taskexecutor daemon on host xuehai-dn. > > *step4:* > CREATE CATALOG myhive WITH ( > 'type' = 'hive', > 'default-database' = 'default', > 'hive-conf-dir' = '/etc/hive/conf' > ); > -- set the HiveCatalog as the current catalog of the session > USE CATALOG myhive; > > *step5:* use the hive > Flink SQL> select * from rptdata.basic_xhsys_user ; > Exception in thread "main" org.apache.flink.table.client.SqlClientException: > Unexpected exception. This is a bug. Please consider filing an issue.
> at > org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:201) > at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161) > Caused by: java.lang.ExceptionInInitializerError > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:348) > at > org.apache.flink.connectors.hive.HiveSourceFileEnumerator.createMRSplits(HiveSourceFileEnumerator.java:94) > at > org.apache.flink.connectors.hive.HiveSourceFileEnumerator.createInputSplits(HiveSourceFileEnumerator.java:71) > at > org.apache.flink.connectors.hive.HiveTableSource.lambda$getDataStream$1(HiveTableSource.java:212) > at > org.apache.flink.connectors.hive.HiveParallelismInference.logRunningTime(HiveParallelismInference.java:107) > at > org.apache.flink.connectors.hive.HiveParallelismInference.infer(HiveParallelismInference.java:95) > at > org.apache.flink.connectors.hive.HiveTableSource.getDataStream(HiveTableSource.java:207) > at > org.apache.flink.connectors.hive.HiveTableSource$1.produceDataStream(HiveTableSource.java:123) > at > org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecTableSourceScan.translateToPlanInternal(CommonExecTableSourceScan.java:96) > at > org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:134) > at > org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:247) > at > org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.java:114) > at > org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:134) > at > org.apache.flink.table.planner.delegation.StreamPlanner.$anonfun$translateToPlan$1(StreamPlanner.scala:70) > at > scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233) > at scala.collection.Iterator.foreach(Iterator.scala:937) > at scala.collection.Iterator.foreach$(Iterator.scala:937) > at
scala.collection.AbstractIterator.foreach(Iterator.scala:1425) > at scala.collection.IterableLike.foreach(IterableLike.scala:70) > at scala.collection.IterableLike.foreach$(IterableLike.scala:69) > at scala.collection.AbstractIterable.foreach(Iterable.scala:54) > at scala.collection.TraversableLike.map(TraversableLike.scala:233) > at scala.collection.TraversableLike.map$(TraversableLike.scala:226) > at scala.collection.AbstractTraversable.map(Traversable.scala:104) > at > org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:69) > at > org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:165) > at > org.apache.flink.table.api.internal.TableEnvir
[jira] [Resolved] (FLINK-28729) flink hive catalog don't support jdk11
[ https://issues.apache.org/jira/browse/FLINK-28729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leonard Xu resolved FLINK-28729. Resolution: Not A Bug > flink hive catalog don't support jdk11 > -- > > Key: FLINK-28729 > URL: https://issues.apache.org/jira/browse/FLINK-28729 > Project: Flink > Issue Type: Bug > Components: Connectors / Hive >Affects Versions: 1.15.1 >Reporter: jeff-zou >Priority: Major > > when I upgraded jdk to 11,I got the following error: > {code:java} > > org.apache.flink > flink-sql-connector-hive-3.1.2_2.12 > 1.15.1 > {code} > {code:java} > // error > Caused by: java.lang.RuntimeException: Unable to instantiate > org.apache.hadoop.hive.metastore.HiveMetaStoreClient > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1654) > at > org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:80) > at > org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:130) > at > org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:115) > ... 84 more > Caused by: java.lang.reflect.InvocationTargetException > at > java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native > Method) > at > java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at > java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1652) > ... 
87 more > Caused by: MetaException(message:Got exception: java.lang.ClassCastException > class [Ljava.lang.Object; cannot be cast to class [Ljava.net.URI; > ([Ljava.lang.Object; and [Ljava.net.URI; are in module java.base of loader > 'bootstrap')) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.logAndThrowMetaException(MetaStoreUtils.java:1342) > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.(HiveMetaStoreClient.java:278) > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.(HiveMetaStoreClient.java:210) > ... 92 more > Process finished with exit code -1 > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
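The ClassCastException in the report above appears to come from a generic-array pattern rather than from any change JDK 11 makes to casting rules: `Stream.toArray()` with no argument always returns an `Object[]`, and an `Object[]` can never be cast to `URI[]` even when every element is a `URI`. The sketch below reproduces the failing cast and the usual fix (passing an array generator). It is an illustration of the mechanism only, not the actual Hive metastore code; the class and method names are invented.

```java
import java.net.URI;
import java.util.Arrays;

public class UriArrayCast {

    // Mirrors the failing pattern: toArray() with no generator returns
    // Object[], so the downcast to URI[] throws ClassCastException at runtime
    // even though every element is a URI. Returns false when the cast fails.
    public static boolean castWithoutGenerator(String... uris) {
        Object[] raw = Arrays.stream(uris).map(URI::create).toArray();
        try {
            URI[] typed = (URI[]) raw; // ClassCastException, as in the report
            return typed.length >= 0;
        } catch (ClassCastException e) {
            return false;
        }
    }

    // The fix: supply an array generator so the stream builds a URI[] directly.
    public static URI[] castWithGenerator(String... uris) {
        return Arrays.stream(uris).map(URI::create).toArray(URI[]::new);
    }
}
```

Whether this exact pattern is what changed between the JDK 8 and JDK 11 code paths in Hive is an assumption here; the stack trace only shows that an `Object[]` reached a `URI[]` cast inside `HiveMetaStoreClient`.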
[jira] [Commented] (FLINK-29755) PulsarSourceUnorderedE2ECase.testSavepoint failed because of missing TaskManagers
[ https://issues.apache.org/jira/browse/FLINK-29755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633487#comment-17633487 ] Leonard Xu commented on FLINK-29755: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43104&view=logs&j=f8e16326-dc75-5ba0-3e95-6178dd55bf6c&t=15c1d318-5ca8-529f-77a2-d113a700ec34 > PulsarSourceUnorderedE2ECase.testSavepoint failed because of missing > TaskManagers > - > > Key: FLINK-29755 > URL: https://issues.apache.org/jira/browse/FLINK-29755 > Project: Flink > Issue Type: Bug > Components: Connectors / Pulsar >Affects Versions: 1.16.0, 1.17.0 >Reporter: Matthias Pohl >Priority: Critical > Labels: test-stability > Attachments: PulsarSourceUnorderedE2ECase.testSavepoint.log > > > [This > build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42325&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=160c9ae5-96fd-516e-1c91-deb81f59292a&l=13932] > failed (not exclusively) due to a problem with > {{PulsarSourceUnorderedE2ECase.testSavepoint}}. It seems like there were no > TaskManagers spun up which resulted in the test job failing with a > {{NoResourceAvailableException}}. > {code} > org.apache.flink.runtime.jobmaster.slotpool.DeclarativeSlotPoolBridge [] - > Could not acquire the minimum required resources, failing slot requests. > Acquired: []. Current slot pool status: Registered TMs: 0, registered slots: > 0 free slots: 0 > {code} > I didn't raise this one to critical because it looks like a missing > TaskManager test environment issue. I attached the e2e test-specific logs to > the Jira issue. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30010) flink-quickstart-test failed due to could not resolve dependencies
Leonard Xu created FLINK-30010: -- Summary: flink-quickstart-test failed due to could not resolve dependencies Key: FLINK-30010 URL: https://issues.apache.org/jira/browse/FLINK-30010 Project: Flink Issue Type: Bug Components: Examples, Tests Affects Versions: 1.17.0 Reporter: Leonard Xu {noformat} Nov 13 02:10:37 [ERROR] Failed to execute goal on project flink-quickstart-test: Could not resolve dependencies for project org.apache.flink:flink-quickstart-test:jar:1.17-SNAPSHOT: Could not find artifact org.apache.flink:flink-quickstart-scala:jar:1.17-SNAPSHOT in apache.snapshots (https://repository.apache.org/snapshots) -> [Help 1] Nov 13 02:10:37 [ERROR] Nov 13 02:10:37 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. Nov 13 02:10:37 [ERROR] Re-run Maven using the -X switch to enable full debug logging. Nov 13 02:10:37 [ERROR] Nov 13 02:10:37 [ERROR] For more information about the errors and possible solutions, please read the following articles: Nov 13 02:10:37 [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException Nov 13 02:10:37 [ERROR] Nov 13 02:10:37 [ERROR] After correcting the problems, you can resume the build with the command Nov 13 02:10:37 [ERROR] mvn -rf :flink-quickstart-test Nov 13 02:10:38 Process exited with EXIT CODE: 1. Nov 13 02:10:38 Trying to KILL watchdog (293). /__w/1/s/tools/ci/watchdog.sh: line 100: 293 Terminated watchdog Nov 13 02:10:38 == Nov 13 02:10:38 Compilation failure detected, skipping test execution. Nov 13 02:10:38 == {noformat} https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43102&view=logs&j=298e20ef-7951-5965-0e79-ea664ddc435e&t=d4c90338-c843-57b0-3232-10ae74f00347&l=18363 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-18356) flink-table-planner Exit code 137 returned from process
[ https://issues.apache.org/jira/browse/FLINK-18356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633485#comment-17633485 ] Leonard Xu commented on FLINK-18356: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43102&view=logs&j=a9db68b9-a7e0-54b6-0f98-010e0aff39e2&t=cdd32e0b-6047-565b-c58f-14054472f1be > flink-table-planner Exit code 137 returned from process > --- > > Key: FLINK-18356 > URL: https://issues.apache.org/jira/browse/FLINK-18356 > Project: Flink > Issue Type: Bug > Components: Build System / Azure Pipelines, Tests >Affects Versions: 1.12.0, 1.13.0, 1.14.0, 1.15.0 >Reporter: Piotr Nowojski >Priority: Critical > Labels: pull-request-available, test-stability > Attachments: 1234.jpg, app-profiling_4.gif > > > {noformat} > = test session starts > == > platform linux -- Python 3.7.3, pytest-5.4.3, py-1.8.2, pluggy-0.13.1 > cachedir: .tox/py37-cython/.pytest_cache > rootdir: /__w/3/s/flink-python > collected 568 items > pyflink/common/tests/test_configuration.py ..[ > 1%] > pyflink/common/tests/test_execution_config.py ...[ > 5%] > pyflink/dataset/tests/test_execution_environment.py . > ##[error]Exit code 137 returned from process: file name '/bin/docker', > arguments 'exec -i -u 1002 > 97fc4e22522d2ced1f4d23096b8929045d083dd0a99a4233a8b20d0489e9bddb > /__a/externals/node/bin/node /__w/_temp/containerHandlerInvoker.js'. > Finishing: Test - python > {noformat} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3729&view=logs&j=9cada3cb-c1d3-5621-16da-0f718fb86602&t=8d78fe4f-d658-5c70-12f8-4921589024c3 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-29755) PulsarSourceUnorderedE2ECase.testSavepoint failed because of missing TaskManagers
[ https://issues.apache.org/jira/browse/FLINK-29755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633483#comment-17633483 ] Leonard Xu commented on FLINK-29755: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43102&view=logs&j=f8e16326-dc75-5ba0-3e95-6178dd55bf6c&t=15c1d318-5ca8-529f-77a2-d113a700ec34 > PulsarSourceUnorderedE2ECase.testSavepoint failed because of missing > TaskManagers > - > > Key: FLINK-29755 > URL: https://issues.apache.org/jira/browse/FLINK-29755 > Project: Flink > Issue Type: Bug > Components: Connectors / Pulsar >Affects Versions: 1.16.0, 1.17.0 >Reporter: Matthias Pohl >Priority: Critical > Labels: test-stability > Attachments: PulsarSourceUnorderedE2ECase.testSavepoint.log > > > [This > build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42325&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=160c9ae5-96fd-516e-1c91-deb81f59292a&l=13932] > failed (not exclusively) due to a problem with > {{PulsarSourceUnorderedE2ECase.testSavepoint}}. It seems like there were no > TaskManagers spun up which resulted in the test job failing with a > {{NoResourceAvailableException}}. > {code} > org.apache.flink.runtime.jobmaster.slotpool.DeclarativeSlotPoolBridge [] - > Could not acquire the minimum required resources, failing slot requests. > Acquired: []. Current slot pool status: Registered TMs: 0, registered slots: > 0 free slots: 0 > {code} > I didn't raise this one to critical because it looks like a missing > TaskManager test environment issue. I attached the e2e test-specific logs to > the Jira issue. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-29830) PulsarSinkITCase$DeliveryGuaranteeTest.writeRecordsToPulsar failed
[ https://issues.apache.org/jira/browse/FLINK-29830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633482#comment-17633482 ] Leonard Xu commented on FLINK-29830: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43089&view=logs&j=8eee98ee-a482-5f7c-2c51-b3456453e704&t=da58e781-88fe-508b-b74c-018210e533cc > PulsarSinkITCase$DeliveryGuaranteeTest.writeRecordsToPulsar failed > -- > > Key: FLINK-29830 > URL: https://issues.apache.org/jira/browse/FLINK-29830 > Project: Flink > Issue Type: Bug > Components: Connectors / Pulsar >Affects Versions: 1.16.0, 1.17.0, 1.15.3 >Reporter: Martijn Visser >Assignee: Yufan Sheng >Priority: Critical > Labels: pull-request-available, test-stability > > {code:java} > Nov 01 01:28:03 [ERROR] Failures: > Nov 01 01:28:03 [ERROR] > PulsarSinkITCase$DeliveryGuaranteeTest.writeRecordsToPulsar:140 > Nov 01 01:28:03 Actual and expected should have same size but actual size is: > Nov 01 01:28:03 0 > Nov 01 01:28:03 while expected size is: > Nov 01 01:28:03 115 > Nov 01 01:28:03 Actual was: > Nov 01 01:28:03 [] > Nov 01 01:28:03 Expected was: > Nov 01 01:28:03 ["AT_LEAST_ONCE-isxrFGAL-0-kO65unDUKX", > Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-1-4tBNu1UmeR", > Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-2-9PTnEahlNU", > Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-3-GjWqEp21yz", > Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-4-jnbJr9C0w8", > Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-5-e8Wacz5yDO", > Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-6-9cW53j3Zcf", > Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-7-jk8z3m2Aa5", > Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-8-VU56KmMeiz", > Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-9-uvMdFxxDAj", > Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-10-FQyWfwJFbH", > ... 
> {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42680&view=logs&j=aa18c3f6-13b8-5f58-86bb-c1cffb239496&t=502fb6c0-30a2-5e49-c5c2-a00fa3acb203&l=37544
[jira] [Commented] (FLINK-29427) LookupJoinITCase failed with classloader problem
[ https://issues.apache.org/jira/browse/FLINK-29427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633481#comment-17633481 ] Leonard Xu commented on FLINK-29427: [~fsk119] Could you take a look this issue? > LookupJoinITCase failed with classloader problem > > > Key: FLINK-29427 > URL: https://issues.apache.org/jira/browse/FLINK-29427 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.16.0, 1.17.0 >Reporter: Huang Xingbo >Assignee: Alexander Smirnov >Priority: Critical > Labels: test-stability > > {code:java} > 2022-09-27T02:49:20.9501313Z Sep 27 02:49:20 Caused by: > org.codehaus.janino.InternalCompilerException: Compiling > "KeyProjection$108341": Trying to access closed classloader. Please check if > you store classloaders directly or indirectly in static fields. If the > stacktrace suggests that the leak occurs in a third party library and cannot > be fixed immediately, you can disable this check with the configuration > 'classloader.check-leaked-classloader'. 
> 2022-09-27T02:49:20.9502654Z Sep 27 02:49:20 at > org.codehaus.janino.UnitCompiler.compileUnit(UnitCompiler.java:382) > 2022-09-27T02:49:20.9503366Z Sep 27 02:49:20 at > org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:237) > 2022-09-27T02:49:20.9504044Z Sep 27 02:49:20 at > org.codehaus.janino.SimpleCompiler.compileToClassLoader(SimpleCompiler.java:465) > 2022-09-27T02:49:20.9504704Z Sep 27 02:49:20 at > org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:216) > 2022-09-27T02:49:20.9505341Z Sep 27 02:49:20 at > org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:207) > 2022-09-27T02:49:20.9505965Z Sep 27 02:49:20 at > org.codehaus.commons.compiler.Cookable.cook(Cookable.java:80) > 2022-09-27T02:49:20.9506584Z Sep 27 02:49:20 at > org.codehaus.commons.compiler.Cookable.cook(Cookable.java:75) > 2022-09-27T02:49:20.9507261Z Sep 27 02:49:20 at > org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:104) > 2022-09-27T02:49:20.9507883Z Sep 27 02:49:20 ... 30 more > 2022-09-27T02:49:20.9509266Z Sep 27 02:49:20 Caused by: > java.lang.IllegalStateException: Trying to access closed classloader. Please > check if you store classloaders directly or indirectly in static fields. If > the stacktrace suggests that the leak occurs in a third party library and > cannot be fixed immediately, you can disable this check with the > configuration 'classloader.check-leaked-classloader'. 
> 2022-09-27T02:49:20.9510835Z Sep 27 02:49:20 at > org.apache.flink.util.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.ensureInner(FlinkUserCodeClassLoaders.java:184) > 2022-09-27T02:49:20.9511760Z Sep 27 02:49:20 at > org.apache.flink.util.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.loadClass(FlinkUserCodeClassLoaders.java:192) > 2022-09-27T02:49:20.9512456Z Sep 27 02:49:20 at > java.lang.Class.forName0(Native Method) > 2022-09-27T02:49:20.9513014Z Sep 27 02:49:20 at > java.lang.Class.forName(Class.java:348) > 2022-09-27T02:49:20.9513649Z Sep 27 02:49:20 at > org.codehaus.janino.ClassLoaderIClassLoader.findIClass(ClassLoaderIClassLoader.java:89) > 2022-09-27T02:49:20.9514339Z Sep 27 02:49:20 at > org.codehaus.janino.IClassLoader.loadIClass(IClassLoader.java:312) > 2022-09-27T02:49:20.9514990Z Sep 27 02:49:20 at > org.codehaus.janino.UnitCompiler.findTypeByName(UnitCompiler.java:8556) > 2022-09-27T02:49:20.9515659Z Sep 27 02:49:20 at > org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6749) > 2022-09-27T02:49:20.9516337Z Sep 27 02:49:20 at > org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6594) > 2022-09-27T02:49:20.9516989Z Sep 27 02:49:20 at > org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:6573) > 2022-09-27T02:49:20.9517632Z Sep 27 02:49:20 at > org.codehaus.janino.UnitCompiler.access$13900(UnitCompiler.java:215) > 2022-09-27T02:49:20.9518319Z Sep 27 02:49:20 at > org.codehaus.janino.UnitCompiler$22$1.visitReferenceType(UnitCompiler.java:6481) > 2022-09-27T02:49:20.9519018Z Sep 27 02:49:20 at > org.codehaus.janino.UnitCompiler$22$1.visitReferenceType(UnitCompiler.java:6476) > 2022-09-27T02:49:20.9519680Z Sep 27 02:49:20 at > org.codehaus.janino.Java$ReferenceType.accept(Java.java:3928) > 2022-09-27T02:49:20.9520386Z Sep 27 02:49:20 at > org.codehaus.janino.UnitCompiler$22.visitType(UnitCompiler.java:6476) > 2022-09-27T02:49:20.9521042Z Sep 27 02:49:20 at > 
org.codehaus.janino.UnitCompiler$22.visitType(UnitCompiler.java:6469) > 2022-09-27T02:49:20.9521677Z Sep 27 02:49:20 at > org.codehaus.janino.Java$ReferenceType.accept(Java.java:3927) > 2022-09-27T02:49:20.9522299Z Sep 27 02:49:20 at > org.codehaus.janino.
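The safety-net check tripped above exists because caching compiled artifacts in a static map that strongly references a user-code classloader keeps that loader (and every class it loaded) alive after the job ends. A minimal illustration of the safer pattern — a weakly-keyed cache whose entries can be collected once the loader is otherwise unreachable. All names here are invented for illustration; this is not Flink's `CompileUtils` implementation.

```java
import java.util.Map;
import java.util.WeakHashMap;

public class CompiledCache {

    // WeakHashMap holds its keys weakly: once nothing else references the
    // ClassLoader, the entry disappears instead of leaking the loader.
    // (The value must not strongly reference the key, or collection is blocked.)
    private static final Map<ClassLoader, String> CACHE = new WeakHashMap<>();

    public static synchronized String getOrCompile(ClassLoader loader, String code) {
        return CACHE.computeIfAbsent(loader, l -> "compiled:" + code);
    }
}
```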
[jira] [Updated] (FLINK-30009) OperatorCoordinator.start()'s JavaDoc mismatches its behavior
[ https://issues.apache.org/jira/browse/FLINK-30009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yunfeng Zhou updated FLINK-30009: - Description: The following description lies in the JavaDoc of {{OperatorCoordinator.start()}}. {{This method is called once at the beginning, before any other methods.}} This description is incorrect because the method {{resetToCheckpoint()}} can be invoked before {{start()}}. For example, {{RecreateOnResetOperatorCoordinator.DeferrableCoordinator.resetAndStart()}} uses these methods in this way. Thus the JavaDoc of {{OperatorCoordinator}}'s methods should be modified to match this behavior. was: The following description lies in the JavaDoc of {{OperatorCoordinator.start()}}. {{This method is called once at the beginning, before any other methods.}} This description is incorrect because the method {{resetToCheckpoint()}} can happen before {{start()}} is invoked. For example, {{RecreateOnResetOperatorCoordinator.DeferrableCoordinator.resetAndStart()}} uses these methods in this way. Thus the JavaDoc of {{OperatorCoordinator}}'s methods should be modified to match this behavior. > OperatorCoordinator.start()'s JavaDoc mismatches its behavior > - > > Key: FLINK-30009 > URL: https://issues.apache.org/jira/browse/FLINK-30009 > Project: Flink > Issue Type: Bug > Components: Documentation >Affects Versions: 1.16.0 >Reporter: Yunfeng Zhou >Priority: Major > > The following description lies in the JavaDoc of > {{OperatorCoordinator.start()}}. > {{This method is called once at the beginning, before any other methods.}} > This description is incorrect because the method {{resetToCheckpoint()}} can > be invoked before {{start()}}. For example, > {{RecreateOnResetOperatorCoordinator.DeferrableCoordinator.resetAndStart()}} > uses these methods in this way. Thus the JavaDoc of {{OperatorCoordinator}}'s > methods should be modified to match this behavior. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-30009) OperatorCoordinator.start()'s JavaDoc mismatches its behavior
Yunfeng Zhou created FLINK-30009: Summary: OperatorCoordinator.start()'s JavaDoc mismatches its behavior Key: FLINK-30009 URL: https://issues.apache.org/jira/browse/FLINK-30009 Project: Flink Issue Type: Bug Components: Documentation Affects Versions: 1.16.0 Reporter: Yunfeng Zhou The following description lies in the JavaDoc of {{OperatorCoordinator.start()}}. {{This method is called once at the beginning, before any other methods.}} This description is incorrect because the method {{resetToCheckpoint()}} can happen before {{start()}} is invoked. For example, {{RecreateOnResetOperatorCoordinator.DeferrableCoordinator.resetAndStart()}} uses these methods in this way. Thus the JavaDoc of {{OperatorCoordinator}}'s methods should be modified to match this behavior.
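The ordering the issue describes — `resetToCheckpoint()` legally arriving before `start()` — can be handled by deferring the checkpoint until start time. The sketch below is a hypothetical minimal coordinator illustrating that contract; it is not Flink's `OperatorCoordinator` API, and the names `DeferringCoordinator` and `applyCheckpoint` are invented.

```java
public class DeferringCoordinator {

    private boolean started = false;
    private byte[] pendingCheckpoint = null;
    private byte[] applied = null;

    // May be called before start(); in that case the data is parked
    // and applied once start() runs, matching the observed call order.
    public void resetToCheckpoint(byte[] checkpointData) {
        if (!started) {
            pendingCheckpoint = checkpointData;
        } else {
            applyCheckpoint(checkpointData);
        }
    }

    public void start() {
        started = true;
        if (pendingCheckpoint != null) {
            applyCheckpoint(pendingCheckpoint);
            pendingCheckpoint = null;
        }
    }

    private void applyCheckpoint(byte[] data) {
        applied = data;
    }

    public byte[] getApplied() {
        return applied;
    }
}
```

This is essentially the deferral that `RecreateOnResetOperatorCoordinator.DeferrableCoordinator.resetAndStart()` relies on, which is why the JavaDoc's "before any other methods" claim is too strong.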
[jira] [Commented] (FLINK-29427) LookupJoinITCase failed with classloader problem
[ https://issues.apache.org/jira/browse/FLINK-29427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633480#comment-17633480 ] Leonard Xu commented on FLINK-29427: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43089&view=logs&j=de826397-1924-5900-0034-51895f69d4b7&t=f311e913-93a2-5a37-acab-4a63e1328f94 > LookupJoinITCase failed with classloader problem > > > Key: FLINK-29427 > URL: https://issues.apache.org/jira/browse/FLINK-29427 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.16.0, 1.17.0 >Reporter: Huang Xingbo >Assignee: Alexander Smirnov >Priority: Critical > Labels: test-stability > > {code:java} > 2022-09-27T02:49:20.9501313Z Sep 27 02:49:20 Caused by: > org.codehaus.janino.InternalCompilerException: Compiling > "KeyProjection$108341": Trying to access closed classloader. Please check if > you store classloaders directly or indirectly in static fields. If the > stacktrace suggests that the leak occurs in a third party library and cannot > be fixed immediately, you can disable this check with the configuration > 'classloader.check-leaked-classloader'. 
> 2022-09-27T02:49:20.9502654Z Sep 27 02:49:20 at > org.codehaus.janino.UnitCompiler.compileUnit(UnitCompiler.java:382) > 2022-09-27T02:49:20.9503366Z Sep 27 02:49:20 at > org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:237) > 2022-09-27T02:49:20.9504044Z Sep 27 02:49:20 at > org.codehaus.janino.SimpleCompiler.compileToClassLoader(SimpleCompiler.java:465) > 2022-09-27T02:49:20.9504704Z Sep 27 02:49:20 at > org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:216) > 2022-09-27T02:49:20.9505341Z Sep 27 02:49:20 at > org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:207) > 2022-09-27T02:49:20.9505965Z Sep 27 02:49:20 at > org.codehaus.commons.compiler.Cookable.cook(Cookable.java:80) > 2022-09-27T02:49:20.9506584Z Sep 27 02:49:20 at > org.codehaus.commons.compiler.Cookable.cook(Cookable.java:75) > 2022-09-27T02:49:20.9507261Z Sep 27 02:49:20 at > org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:104) > 2022-09-27T02:49:20.9507883Z Sep 27 02:49:20 ... 30 more > 2022-09-27T02:49:20.9509266Z Sep 27 02:49:20 Caused by: > java.lang.IllegalStateException: Trying to access closed classloader. Please > check if you store classloaders directly or indirectly in static fields. If > the stacktrace suggests that the leak occurs in a third party library and > cannot be fixed immediately, you can disable this check with the > configuration 'classloader.check-leaked-classloader'. 
> 2022-09-27T02:49:20.9510835Z Sep 27 02:49:20 at > org.apache.flink.util.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.ensureInner(FlinkUserCodeClassLoaders.java:184) > 2022-09-27T02:49:20.9511760Z Sep 27 02:49:20 at > org.apache.flink.util.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.loadClass(FlinkUserCodeClassLoaders.java:192) > 2022-09-27T02:49:20.9512456Z Sep 27 02:49:20 at > java.lang.Class.forName0(Native Method) > 2022-09-27T02:49:20.9513014Z Sep 27 02:49:20 at > java.lang.Class.forName(Class.java:348) > 2022-09-27T02:49:20.9513649Z Sep 27 02:49:20 at > org.codehaus.janino.ClassLoaderIClassLoader.findIClass(ClassLoaderIClassLoader.java:89) > 2022-09-27T02:49:20.9514339Z Sep 27 02:49:20 at > org.codehaus.janino.IClassLoader.loadIClass(IClassLoader.java:312) > 2022-09-27T02:49:20.9514990Z Sep 27 02:49:20 at > org.codehaus.janino.UnitCompiler.findTypeByName(UnitCompiler.java:8556) > 2022-09-27T02:49:20.9515659Z Sep 27 02:49:20 at > org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6749) > 2022-09-27T02:49:20.9516337Z Sep 27 02:49:20 at > org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6594) > 2022-09-27T02:49:20.9516989Z Sep 27 02:49:20 at > org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:6573) > 2022-09-27T02:49:20.9517632Z Sep 27 02:49:20 at > org.codehaus.janino.UnitCompiler.access$13900(UnitCompiler.java:215) > 2022-09-27T02:49:20.9518319Z Sep 27 02:49:20 at > org.codehaus.janino.UnitCompiler$22$1.visitReferenceType(UnitCompiler.java:6481) > 2022-09-27T02:49:20.9519018Z Sep 27 02:49:20 at > org.codehaus.janino.UnitCompiler$22$1.visitReferenceType(UnitCompiler.java:6476) > 2022-09-27T02:49:20.9519680Z Sep 27 02:49:20 at > org.codehaus.janino.Java$ReferenceType.accept(Java.java:3928) > 2022-09-27T02:49:20.9520386Z Sep 27 02:49:20 at > org.codehaus.janino.UnitCompiler$22.visitType(UnitCompiler.java:6476) > 2022-09-27T02:49:20.9521042Z Sep 27 02:49:20 at > 
org.codehaus.janino.UnitCompiler$22.visitType(UnitCompiler.java:6469) > 2022-09-27T02:49:20.9521677Z Sep 27 02:49:20 at > org.codehaus.ja
[jira] [Commented] (FLINK-29830) PulsarSinkITCase$DeliveryGuaranteeTest.writeRecordsToPulsar failed
[ https://issues.apache.org/jira/browse/FLINK-29830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633479#comment-17633479 ] Leonard Xu commented on FLINK-29830: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43089&view=logs&j=8eee98ee-a482-5f7c-2c51-b3456453e704&t=da58e781-88fe-508b-b74c-018210e533cc > PulsarSinkITCase$DeliveryGuaranteeTest.writeRecordsToPulsar failed > -- > > Key: FLINK-29830 > URL: https://issues.apache.org/jira/browse/FLINK-29830 > Project: Flink > Issue Type: Bug > Components: Connectors / Pulsar >Affects Versions: 1.16.0, 1.17.0, 1.15.3 >Reporter: Martijn Visser >Assignee: Yufan Sheng >Priority: Critical > Labels: pull-request-available, test-stability > > {code:java} > Nov 01 01:28:03 [ERROR] Failures: > Nov 01 01:28:03 [ERROR] > PulsarSinkITCase$DeliveryGuaranteeTest.writeRecordsToPulsar:140 > Nov 01 01:28:03 Actual and expected should have same size but actual size is: > Nov 01 01:28:03 0 > Nov 01 01:28:03 while expected size is: > Nov 01 01:28:03 115 > Nov 01 01:28:03 Actual was: > Nov 01 01:28:03 [] > Nov 01 01:28:03 Expected was: > Nov 01 01:28:03 ["AT_LEAST_ONCE-isxrFGAL-0-kO65unDUKX", > Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-1-4tBNu1UmeR", > Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-2-9PTnEahlNU", > Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-3-GjWqEp21yz", > Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-4-jnbJr9C0w8", > Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-5-e8Wacz5yDO", > Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-6-9cW53j3Zcf", > Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-7-jk8z3m2Aa5", > Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-8-VU56KmMeiz", > Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-9-uvMdFxxDAj", > Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-10-FQyWfwJFbH", > ... 
> {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42680&view=logs&j=aa18c3f6-13b8-5f58-86bb-c1cffb239496&t=502fb6c0-30a2-5e49-c5c2-a00fa3acb203&l=37544 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-28394) Python py36-cython: InvocationError for command install_command.sh fails with exit code 1
[ https://issues.apache.org/jira/browse/FLINK-28394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633477#comment-17633477 ] Leonard Xu commented on FLINK-28394: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43087&view=logs&j=9cada3cb-c1d3-5621-16da-0f718fb86602&t=c67e71ed-6451-5d26-8920-5a8cf9651901 > Python py36-cython: InvocationError for command install_command.sh fails with > exit code 1 > - > > Key: FLINK-28394 > URL: https://issues.apache.org/jira/browse/FLINK-28394 > Project: Flink > Issue Type: Bug > Components: API / Python >Affects Versions: 1.16.0, 1.15.3 >Reporter: Martijn Visser >Assignee: Huang Xingbo >Priority: Major > Labels: stale-assigned, test-stability > > {code:java} > Jul 05 03:47:22 Picked up JAVA_TOOL_OPTIONS: -XX:+HeapDumpOnOutOfMemoryError > Jul 05 03:47:32 Using Python version 3.8.13 (default, Mar 28 2022 11:38:47) > Jul 05 03:47:32 pip_test_code.py success! > Jul 05 03:47:32 py38-cython finish: run-test after 1658.14 seconds > Jul 05 03:47:32 py38-cython start: run-test-post > Jul 05 03:47:32 py38-cython finish: run-test-post after 0.00 seconds > Jul 05 03:47:32 ___ summary > > Jul 05 03:47:32 ERROR: py36-cython: InvocationError for command > /__w/3/s/flink-python/dev/install_command.sh --exists-action w > .tox/.tmp/package/1/apache-flink-1.15.dev0.zip (exited with code 1) > Jul 05 03:47:32 py37-cython: commands succeeded > Jul 05 03:47:32 py38-cython: commands succeeded > Jul 05 03:47:32 cleanup > /__w/3/s/flink-python/.tox/.tmp/package/1/apache-flink-1.15.dev0.zip > Jul 05 03:47:33 tox checks... [FAILED] > Jul 05 03:47:33 Process exited with EXIT CODE: 1. > {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=37604&view=logs&j=bf5e383b-9fd3-5f02-ca1c-8f788e2e76d3&t=85189c57-d8a0-5c9c-b61d-fc05cfac62cf&l=27789 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] jgrier merged pull request #21278: [FLINK-29962] Exclude jamon 2.3.1 from dependencies
jgrier merged PR #21278: URL: https://github.com/apache/flink/pull/21278 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-29859) TPC-DS end-to-end test with adaptive batch scheduler failed due to non-empty .out files.
[ https://issues.apache.org/jira/browse/FLINK-29859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633475#comment-17633475 ] Leonard Xu commented on FLINK-29859: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43077&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=160c9ae5-96fd-516e-1c91-deb81f59292a > TPC-DS end-to-end test with adaptive batch scheduler failed due to > non-empty .out files. > --- > > Key: FLINK-29859 > URL: https://issues.apache.org/jira/browse/FLINK-29859 > Project: Flink > Issue Type: Bug > Components: Tests >Affects Versions: 1.16.0, 1.17.0 >Reporter: Leonard Xu >Priority: Major > > Nov 03 02:02:12 [FAIL] 'TPC-DS end-to-end test with adaptive batch scheduler' > failed after 21 minutes and 44 seconds! Test exited with exit code 0 but the > logs contained errors, exceptions or non-empty .out files > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42766&view=logs&s=ae4f8708-9994-57d3-c2d7-b892156e7812&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a
[GitHub] [flink] link3280 commented on pull request #21292: [FLINK-28617][SQL Gateway] Support stop job statement in SqlGatewayService
link3280 commented on PR #21292: URL: https://github.com/apache/flink/pull/21292#issuecomment-1313035136 The CI failed due to an unrelated Kafka connector test. We may take a look at the code first. cc @fsk119
[GitHub] [flink] jgrier commented on pull request #21278: [FLINK-29962] Exclude jamon 2.3.1 from dependencies
jgrier commented on PR #21278: URL: https://github.com/apache/flink/pull/21278#issuecomment-1313034406 @flinkbot run azure
[GitHub] [flink] shuiqiangchen commented on pull request #20745: [FLINK-28988] Don't push above filters down into the right table for temporal join
shuiqiangchen commented on PR #20745: URL: https://github.com/apache/flink/pull/20745#issuecomment-1313023060 @lincoln-lil Thank you for having a look at the PR. I would like to finish this work.
[jira] [Updated] (FLINK-25255) Expose Changelog checkpoints via State Processor API
[ https://issues.apache.org/jira/browse/FLINK-25255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yanfei Lei updated FLINK-25255: --- Fix Version/s: 1.17.0 > Expose Changelog checkpoints via State Processor API > > > Key: FLINK-25255 > URL: https://issues.apache.org/jira/browse/FLINK-25255 > Project: Flink > Issue Type: New Feature > Components: API / State Processor, Runtime / State Backends >Reporter: Piotr Nowojski >Priority: Minor > Fix For: 1.17.0 > >
[jira] [Updated] (FLINK-24402) Add a metric for back-pressure from the ChangelogStateBackend
[ https://issues.apache.org/jira/browse/FLINK-24402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yanfei Lei updated FLINK-24402: --- Fix Version/s: 1.17.0 > Add a metric for back-pressure from the ChangelogStateBackend > - > > Key: FLINK-24402 > URL: https://issues.apache.org/jira/browse/FLINK-24402 > Project: Flink > Issue Type: New Feature > Components: Runtime / Checkpointing, Runtime / Metrics, Runtime / > State Backends >Reporter: Roman Khachatryan >Priority: Major > Fix For: 1.17.0 > > > FLINK-23381 adds back-pressure, this task is to add monitoring for that. > See design doc: > https://docs.google.com/document/d/1k5WkWIYzs3n3GYQC76H9BLGxvN3wuq7qUHJuBPR9YX0/edit#heading=h.ayt6cka7z0qf > Can be reported as back-pressured by backend per second, similar to how > "regular" back-pressure is currently reported > ([prototype|https://github.com/rkhachatryan/flink/tree/clsb-bp-test]). > Metric name: stateBackendBlockedTimeMsPerSecond > Take into account: > * there is blocking and non-blocking waiting for changelog availability (see > [https://github.com/apache/flink/pull/17229#discussion_r740111285)] > * UI needs to be adjusted in several places: Task label; Task details > * Back-pressure status label should probably be adjusted > * If changelog is disabled then the metric shouldn't be shown > Consider whether to include changelog back-pressure into overall > back-pressure > (https://github.com/apache/flink/pull/17229#discussion_r738322138 ). > > Uploading metrics should be added in FLINK-23486. -- This message was sent by Atlassian Jira (v8.20.10#820010)
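A "blocked time per second" style metric of the kind the issue proposes (`stateBackendBlockedTimeMsPerSecond`) can be sketched as an accumulator driven by an injectable clock, which also makes it deterministic to test. This is a hedged illustration only — the class and method names are invented and this is not Flink's metrics API.

```java
import java.util.function.LongSupplier;

public class BlockedTimeMeter {

    private final LongSupplier clockMs; // injectable clock, e.g. System::currentTimeMillis
    private long blockedSinceMs = -1;
    private long accumulatedMs = 0;

    public BlockedTimeMeter(LongSupplier clockMs) {
        this.clockMs = clockMs;
    }

    // Called when the task starts waiting on the changelog backend.
    public void markBlockedStart() {
        blockedSinceMs = clockMs.getAsLong();
    }

    // Called when the wait ends; accumulates the blocked interval.
    public void markBlockedEnd() {
        if (blockedSinceMs >= 0) {
            accumulatedMs += clockMs.getAsLong() - blockedSinceMs;
            blockedSinceMs = -1;
        }
    }

    // Read-and-reset, so a reporter sampling once per second gets ms/second.
    public long getAndResetBlockedMs() {
        long value = accumulatedMs;
        accumulatedMs = 0;
        return value;
    }
}
```

The read-and-reset design matches the "per second" semantics in the proposed name: a periodic reporter polls the meter at a fixed interval and the returned value is the blocked time within that interval.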
[GitHub] [flink] link3280 commented on pull request #21292: [FLINK-28617][SQL Gateway] Support stop job statement in SqlGatewayService
link3280 commented on PR #21292: URL: https://github.com/apache/flink/pull/21292#issuecomment-1313003298 @flinkbot run azure -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-25512) Materialization Files are not cleaned up if no checkpoint is using it
[ https://issues.apache.org/jira/browse/FLINK-25512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yanfei Lei updated FLINK-25512:
    Fix Version/s: 1.17.0

> Materialization Files are not cleaned up if no checkpoint is using it
>
> Key: FLINK-25512
> URL: https://issues.apache.org/jira/browse/FLINK-25512
> Project: Flink
> Issue Type: Bug
> Components: Runtime / State Backends
> Affects Versions: 1.15.0
> Reporter: Yuan Mei
> Assignee: Nicholas Jiang
> Priority: Minor
> Labels: stale-assigned
> Fix For: 1.17.0
>
> This can happen if no checkpoint succeeds within the materialization interval.
[GitHub] [flink-ml] yunfengzhou-hub commented on a diff in pull request #172: [FLINK-29592] Add Estimator and Transformer for RobustScaler
yunfengzhou-hub commented on code in PR #172: URL: https://github.com/apache/flink-ml/pull/172#discussion_r1021035116 ## flink-ml-lib/src/main/java/org/apache/flink/ml/feature/robustscaler/RobustScaler.java: ## @@ -0,0 +1,183 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.flink.ml.feature.robustscaler; + +import org.apache.flink.api.common.functions.AggregateFunction; +import org.apache.flink.api.common.functions.MapFunction; +import org.apache.flink.ml.api.Estimator; +import org.apache.flink.ml.common.datastream.DataStreamUtils; +import org.apache.flink.ml.common.util.QuantileSummary; +import org.apache.flink.ml.linalg.DenseVector; +import org.apache.flink.ml.linalg.Vector; +import org.apache.flink.ml.param.Param; +import org.apache.flink.ml.util.ParamUtils; +import org.apache.flink.ml.util.ReadWriteUtils; +import org.apache.flink.streaming.api.datastream.DataStream; +import org.apache.flink.table.api.Table; +import org.apache.flink.table.api.bridge.java.StreamTableEnvironment; +import org.apache.flink.table.api.internal.TableImpl; +import org.apache.flink.types.Row; +import org.apache.flink.util.Preconditions; + +import java.io.IOException; +import java.util.Arrays; +import java.util.HashMap; +import java.util.Map; +import java.util.stream.Collectors; + +/** + * Scale features using statistics that are robust to outliers. + * + * This Scaler removes the median and scales the data according to the quantile range (defaults + * to IQR: Interquartile Range). The IQR is the range between the 1st quartile (25th quantile) and + * the 3rd quartile (75th quantile) but can be configured. + * + * Centering and scaling happen independently on each feature by computing the relevant + * statistics on the samples in the training set. Median and quantile range are then stored to be + * used on later data using the transform method. + * + * Standardization of a dataset is a common requirement for many machine learning estimators. + * Typically this is done by removing the mean and scaling to unit variance. However, outliers can + * often influence the sample mean / variance in a negative way. In such cases, the median and the + * interquartile range often give better results. 
Review Comment: Sorry, I mistook the meaning as "the median range and the interquartile range". I agree that there is no grammar error now.
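The javadoc quoted above describes the underlying computation: subtract the per-feature median and divide by the quantile range (by default the IQR, i.e. the 75th minus the 25th percentile). A minimal stand-alone sketch of that formula follows; it is not the Flink ML operator itself, and the class and method names are invented for illustration.

```java
import java.util.Arrays;

public class RobustScaleSketch {
    // Returns the q-th quantile (0 <= q <= 1) of a sorted array,
    // using linear interpolation between adjacent order statistics.
    static double quantile(double[] sorted, double q) {
        double pos = q * (sorted.length - 1);
        int lo = (int) Math.floor(pos);
        int hi = (int) Math.ceil(pos);
        return sorted[lo] + (pos - lo) * (sorted[hi] - sorted[lo]);
    }

    // Core of robust scaling: center on the median, scale by the quantile range.
    static double scale(double x, double median, double quantileRange) {
        return (x - median) / quantileRange;
    }

    public static void main(String[] args) {
        double[] data = {1, 2, 3, 4, 5, 6, 7, 8, 100}; // 100 is an outlier
        double[] sorted = data.clone();
        Arrays.sort(sorted);
        double median = quantile(sorted, 0.5);
        double iqr = quantile(sorted, 0.75) - quantile(sorted, 0.25);
        System.out.println(median);                // 5.0 (unaffected by the outlier)
        System.out.println(iqr);                   // 4.0
        System.out.println(scale(9, median, iqr)); // 1.0
    }
}
```

Note how the outlier (100) barely moves the median and IQR, whereas it would dominate the mean and variance used by standard scaling; that is the point the javadoc makes. The actual Flink ML implementation computes approximate quantiles with `QuantileSummary` over a distributed stream rather than sorting in memory.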
[GitHub] [flink] flinkbot commented on pull request #21311: [FLINK-29992][hive] fix lookup join fail with Hive table as lookup table source
flinkbot commented on PR #21311: URL: https://github.com/apache/flink/pull/21311#issuecomment-1312961368

## CI report:

* 1126862ce143600a501013f86c8855a96525a42e UNKNOWN

Bot commands: the @flinkbot bot supports the following commands:

- `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] lincoln-lil commented on pull request #20745: [FLINK-28988] Don't push above filters down into the right table for temporal join
lincoln-lil commented on PR #20745: URL: https://github.com/apache/flink/pull/20745#issuecomment-1312961086

@shuiqiangchen I found this PR while combing through the list of SQL-related legacy issues. Recently we fixed another similar user case on event-time temporal join in FLINK-29849, which includes two problems: 1. ChangelogNormalize is incorrectly added for an upsert source; 2. incorrect filter pushdown. For the second one, I think your solution, which only prevents pushing down filters related to the right side of the input for the event-time temporal join, is better. Would you like to continue this work and fix the failed tests first? After that is done, the PR for FLINK-29849 can remove the filter part and be based on your fix.
[GitHub] [flink] luoyuxia commented on pull request #21302: [FLINK-29992][hive] fix lookup join fail with Hive table as lookup table source
luoyuxia commented on PR #21302: URL: https://github.com/apache/flink/pull/21302#issuecomment-1312960002

@leonardBang
1.16: https://github.com/apache/flink/pull/21310
1.15: https://github.com/apache/flink/pull/21309
1.14: https://github.com/apache/flink/pull/21311
[GitHub] [flink] luoyuxia commented on pull request #21311: [FLINK-29992][hive] fix lookup join fail with Hive table as lookup table source
luoyuxia commented on PR #21311: URL: https://github.com/apache/flink/pull/21311#issuecomment-1312959501

Let's wait for the CI to pass.
[GitHub] [flink] luoyuxia opened a new pull request, #21311: [FLINK-29992][hive] fix lookup join fail with Hive table as lookup table source
luoyuxia opened a new pull request, #21311: URL: https://github.com/apache/flink/pull/21311
[GitHub] [flink] flinkbot commented on pull request #21310: [FLINK-29992][hive] Fix lookup join fail with Hive table as lookup table source
flinkbot commented on PR #21310: URL: https://github.com/apache/flink/pull/21310#issuecomment-1312958494

## CI report:

* d66ef307290cc55167b7b4d4e1c615f9b5658783 UNKNOWN

Bot commands: the @flinkbot bot supports the following commands:

- `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] luoyuxia opened a new pull request, #21310: [FLINK-29992][hive] Fix lookup join fail with Hive table as lookup table source
luoyuxia opened a new pull request, #21310: URL: https://github.com/apache/flink/pull/21310

This closes #21302. Backport for #21302.
[GitHub] [flink] flinkbot commented on pull request #21309: [FLINK-29992][hive] fix lookup join fail with Hive table as lookup table source
flinkbot commented on PR #21309: URL: https://github.com/apache/flink/pull/21309#issuecomment-1312953496

## CI report:

* dc5be9947ecc61b72ff44ce997c6519dd5944286 UNKNOWN

Bot commands: the @flinkbot bot supports the following commands:

- `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] luoyuxia commented on pull request #21309: [FLINK-29992][hive] fix lookup join fail with Hive table as lookup table source
luoyuxia commented on PR #21309: URL: https://github.com/apache/flink/pull/21309#issuecomment-1312952537

Let's wait for the CI to pass.
[GitHub] [flink] luoyuxia opened a new pull request, #21309: [FLINK-29992][hive] Fix Hive lookup join fail when column pushdown to…
luoyuxia opened a new pull request, #21309: URL: https://github.com/apache/flink/pull/21309

… Hive lookup table source

This closes #21302. Backport for #21302.
[jira] [Commented] (FLINK-29992) Join execution plan parsing error
[ https://issues.apache.org/jira/browse/FLINK-29992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633438#comment-17633438 ] Leonard Xu commented on FLINK-29992: master:a4f9bfd1483ef64b0ed167bd29c98596e3bd5f49 release-1.16: TODO release-1.15: TODO release-1.14: TODO > Join execution plan parsing error > - > > Key: FLINK-29992 > URL: https://issues.apache.org/jira/browse/FLINK-29992 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.16.0, 1.17.0, 1.15.3 >Reporter: HunterXHunter >Assignee: luoyuxia >Priority: Major > Labels: pull-request-available > > {code:java} > // > tableEnv.executeSql(" CREATE CATALOG hive WITH (\n" > + " 'type' = 'hive',\n" > + " 'default-database' = 'flinkdebug',\n" > + " 'hive-conf-dir' = '/programe/hadoop/hive-3.1.2/conf'\n" > + " )"); > tableEnv.executeSql("create table datagen_tbl (\n" > + "id STRING\n" > + ",name STRING\n" > + ",age bigint\n" > + ",ts bigint\n" > + ",`par` STRING\n" > + ",pro_time as PROCTIME()\n" > + ") with (\n" > + " 'connector'='datagen'\n" > + ",'rows-per-second'='10'\n" > + " \n" > + ")"); > String dml1 = "select * " > + " from datagen_tbl as p " > + " join hive.flinkdebug.default_hive_src_tbl " > + " FOR SYSTEM_TIME AS OF p.pro_time AS c" > + " ON p.id = c.id"; > // Execution succeeded > System.out.println(tableEnv.explainSql(dml1)); > String dml2 = "select p.id " > + " from datagen_tbl as p " > + " join hive.flinkdebug.default_hive_src_tbl " > + " FOR SYSTEM_TIME AS OF p.pro_time AS c" > + " ON p.id = c.id"; > // Throw an exception > System.out.println(tableEnv.explainSql(dml2)); {code} > {code:java} > org.apache.flink.table.api.TableException: Cannot generate a valid execution > plan for the given query: FlinkLogicalCalc(select=[id]) +- > FlinkLogicalJoin(condition=[=($0, $1)], joinType=[inner]) :- > FlinkLogicalCalc(select=[id]) : +- > FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, > datagen_tbl]], 
fields=[id, name, age, ts, par]) +- > FlinkLogicalSnapshot(period=[$cor1.pro_time]) +- > FlinkLogicalTableSourceScan(table=[[hive, flinkdebug, default_hive_src_tbl, > project=[id]]], fields=[id])This exception indicates that the query uses an > unsupported SQL feature. Please check the documentation for the set of > currently supported SQL features. at > org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:70) > at > org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:59) > > {code}
[GitHub] [flink] leonardBang commented on pull request #21302: [FLINK-29992][hive] fix lookup join fail with Hive table as lookup table source
leonardBang commented on PR #21302: URL: https://github.com/apache/flink/pull/21302#issuecomment-1312940296

@luoyuxia Could you also open PRs for release-1.14, release-1.15 and release-1.16?
[GitHub] [flink] leonardBang merged pull request #21302: [FLINK-29992][hive] fix lookup join fail with Hive table as lookup table source
leonardBang merged PR #21302: URL: https://github.com/apache/flink/pull/21302
[jira] [Updated] (FLINK-29992) Join execution plan parsing error
[ https://issues.apache.org/jira/browse/FLINK-29992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leonard Xu updated FLINK-29992: --- Affects Version/s: 1.15.3 > Join execution plan parsing error > - > > Key: FLINK-29992 > URL: https://issues.apache.org/jira/browse/FLINK-29992 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.16.0, 1.17.0, 1.15.3 >Reporter: HunterXHunter >Assignee: luoyuxia >Priority: Major > Labels: pull-request-available > > {code:java} > // > tableEnv.executeSql(" CREATE CATALOG hive WITH (\n" > + " 'type' = 'hive',\n" > + " 'default-database' = 'flinkdebug',\n" > + " 'hive-conf-dir' = '/programe/hadoop/hive-3.1.2/conf'\n" > + " )"); > tableEnv.executeSql("create table datagen_tbl (\n" > + "id STRING\n" > + ",name STRING\n" > + ",age bigint\n" > + ",ts bigint\n" > + ",`par` STRING\n" > + ",pro_time as PROCTIME()\n" > + ") with (\n" > + " 'connector'='datagen'\n" > + ",'rows-per-second'='10'\n" > + " \n" > + ")"); > String dml1 = "select * " > + " from datagen_tbl as p " > + " join hive.flinkdebug.default_hive_src_tbl " > + " FOR SYSTEM_TIME AS OF p.pro_time AS c" > + " ON p.id = c.id"; > // Execution succeeded > System.out.println(tableEnv.explainSql(dml1)); > String dml2 = "select p.id " > + " from datagen_tbl as p " > + " join hive.flinkdebug.default_hive_src_tbl " > + " FOR SYSTEM_TIME AS OF p.pro_time AS c" > + " ON p.id = c.id"; > // Throw an exception > System.out.println(tableEnv.explainSql(dml2)); {code} > {code:java} > org.apache.flink.table.api.TableException: Cannot generate a valid execution > plan for the given query: FlinkLogicalCalc(select=[id]) +- > FlinkLogicalJoin(condition=[=($0, $1)], joinType=[inner]) :- > FlinkLogicalCalc(select=[id]) : +- > FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, > datagen_tbl]], fields=[id, name, age, ts, par]) +- > FlinkLogicalSnapshot(period=[$cor1.pro_time]) +- > FlinkLogicalTableSourceScan(table=[[hive, 
flinkdebug, default_hive_src_tbl, > project=[id]]], fields=[id])This exception indicates that the query uses an > unsupported SQL feature. Please check the documentation for the set of > currently supported SQL features. at > org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:70) > at > org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:59) > > {code}
[jira] [Assigned] (FLINK-29992) Join execution plan parsing error
[ https://issues.apache.org/jira/browse/FLINK-29992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Leonard Xu reassigned FLINK-29992: -- Assignee: luoyuxia > Join execution plan parsing error > - > > Key: FLINK-29992 > URL: https://issues.apache.org/jira/browse/FLINK-29992 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.16.0, 1.17.0 >Reporter: HunterXHunter >Assignee: luoyuxia >Priority: Major > Labels: pull-request-available > > {code:java} > // > tableEnv.executeSql(" CREATE CATALOG hive WITH (\n" > + " 'type' = 'hive',\n" > + " 'default-database' = 'flinkdebug',\n" > + " 'hive-conf-dir' = '/programe/hadoop/hive-3.1.2/conf'\n" > + " )"); > tableEnv.executeSql("create table datagen_tbl (\n" > + "id STRING\n" > + ",name STRING\n" > + ",age bigint\n" > + ",ts bigint\n" > + ",`par` STRING\n" > + ",pro_time as PROCTIME()\n" > + ") with (\n" > + " 'connector'='datagen'\n" > + ",'rows-per-second'='10'\n" > + " \n" > + ")"); > String dml1 = "select * " > + " from datagen_tbl as p " > + " join hive.flinkdebug.default_hive_src_tbl " > + " FOR SYSTEM_TIME AS OF p.pro_time AS c" > + " ON p.id = c.id"; > // Execution succeeded > System.out.println(tableEnv.explainSql(dml1)); > String dml2 = "select p.id " > + " from datagen_tbl as p " > + " join hive.flinkdebug.default_hive_src_tbl " > + " FOR SYSTEM_TIME AS OF p.pro_time AS c" > + " ON p.id = c.id"; > // Throw an exception > System.out.println(tableEnv.explainSql(dml2)); {code} > {code:java} > org.apache.flink.table.api.TableException: Cannot generate a valid execution > plan for the given query: FlinkLogicalCalc(select=[id]) +- > FlinkLogicalJoin(condition=[=($0, $1)], joinType=[inner]) :- > FlinkLogicalCalc(select=[id]) : +- > FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, > datagen_tbl]], fields=[id, name, age, ts, par]) +- > FlinkLogicalSnapshot(period=[$cor1.pro_time]) +- > FlinkLogicalTableSourceScan(table=[[hive, flinkdebug, 
default_hive_src_tbl, > project=[id]]], fields=[id])This exception indicates that the query uses an > unsupported SQL feature. Please check the documentation for the set of > currently supported SQL features. at > org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:70) > at > org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:59) > > {code}
[GitHub] [flink] FlechazoW closed pull request #21295: [Typo] Fix the typo 'retriable', which means 'retryable'.
FlechazoW closed pull request #21295: [Typo] Fix the typo 'retriable', which means 'retryable'. URL: https://github.com/apache/flink/pull/21295