[GitHub] [flink] flinkbot edited a comment on pull request #12266: [FLINK-17843][table-api] Check the RowKind when converting a Row from object to an expression
flinkbot edited a comment on pull request #12266: URL: https://github.com/apache/flink/pull/12266#issuecomment-631427812 ## CI report: * d3268f7bfdf1dfa2c19dc2b38c80a4ee84a5f26c Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1943) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #12244: [FLINK-17258][network] Fix couple of ITCases that were failing with enabled unaligned checkpoints
flinkbot edited a comment on pull request #12244: URL: https://github.com/apache/flink/pull/12244#issuecomment-630723509 ## CI report: * 3dcc9233af810b8be408665c0083fab404a2dea5 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1833) * 7fa2068a283b9471384248c1bf301e3d406b5f48 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1951) * d0d2042ea348654430863ccb51084c30714d8a47 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1959) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #12269: [FLINK-17351] [runtime] Increase `continuousFailureCounter` in `CheckpointFailureManager` for CHECKPOINT_EXPIRED
flinkbot edited a comment on pull request #12269: URL: https://github.com/apache/flink/pull/12269#issuecomment-631541996 ## CI report: * 24c44fd00652a6b5859075b3afea1e4e9ca98445 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1960) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Comment Edited] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in Java 11
[ https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112350#comment-17112350 ]

Andrey Zagrebin edited comment on FLINK-17822 at 5/20/20, 3:23 PM:
-------------------------------------------------------------------

FLINK-15758 did not export the jdk.internal.misc package used by reflection for the GC cleaners of managed memory. Our PR CI does not run Java 11 tests atm.

The package has to be exported by a JVM runtime arg: --add-opens java.base/jdk.internal.misc=ALL-UNNAMED. If this arg is set for Java 8, it fails the JVM process. Therefore, the fix is complicated, as we also have to apply it for e.g. the Yarn CLI, where client and cluster may run different Java versions.

An alternative, quicker fix is to call the private method directly (it has to be made accessible via reflection):
- java.lang.ref.Reference.tryHandlePending(false) // for Java 8
- java.lang.ref.Reference.waitForReferenceProcessing() // for Java 11

Unfortunately, this leads to the annoying warning for Java 11:

{code:java}
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunnerProvider (file:/Users/azagrebin/projects/flink/flink-core/target/classes/) to method java.lang.ref.Reference.waitForReferenceProcessing()
WARNING: Please consider reporting this to the maintainers of org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunnerProvider
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
{code}

We can do the quick fix and think about how to tackle the warning in a follow-up.
> Nightly Flink CLI end-to-end test failed with > "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class > jdk.internal.misc.SharedSecrets" in Java 11 > -- > > Key: FLINK-17822 > URL: https://issues.apache.org/jira/browse/FLINK-17822 > Project: Flink > Issue Type: Bug > Components: Runtime / Task, Tests >Affects Versions: 1.11.0 >Reporter: Dian Fu >Assignee: Andrey Zagrebin >Priority: Blocker > Labels: test-stability > Fix For: 1.11.0 > > > Instance: > https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1887/logs/600 > {code} > 2020-05-19T21:59:39.8829043Z 2020-05-19 21:59:25,193 ERROR > org.apache.flink.util.JavaGcCleanerWrapper [] - FATAL > UNEXPECTED - Failed to invoke waitForReferenceProcessing > 2020-05-19T21:59:39.8829849Z java.lang.IllegalAccessException: class > org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner cannot > access class jdk.internal.misc.SharedSecrets (in module java.base) because > module java.base does not export jdk.internal.misc to unnamed module @54e3658c > 2020-05-19T21:59:39.8830707Z at > jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361) > ~[?:?] > 2020-05-19T21:59:39.8831166Z at > java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:591) > ~[?:?] > 2020-05-19T21:59:39.8831744Z at > java.lang.reflect.Method.invoke(Method.java:558) ~[?:?] > 2020-05-19
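The reflective fallback described in the comment above can be sketched as follows. This is a minimal illustration, not Flink's actual JavaGcCleanerWrapper: the method names (`waitForReferenceProcessing`, `tryHandlePending`) come from the comment, the class name `GcCleanerSketch` is made up, and on JDK 16+ the `setAccessible` call is expected to fail unless `java.base/java.lang.ref` is opened via `--add-opens`.

```java
import java.lang.ref.Reference;
import java.lang.reflect.Method;

public class GcCleanerSketch {

    /**
     * Looks up the JDK-internal "process pending references" hook,
     * or returns null if no such method is accessible on this JVM.
     */
    static Method findPendingReferenceMethod() {
        // Java 9+: private static boolean waitForReferenceProcessing()
        Method m = lookup("waitForReferenceProcessing");
        if (m == null) {
            // Java 8: private static boolean tryHandlePending(boolean)
            m = lookup("tryHandlePending", boolean.class);
        }
        return m;
    }

    private static Method lookup(String name, Class<?>... params) {
        try {
            Method m = Reference.class.getDeclaredMethod(name, params);
            // Triggers the "illegal reflective access" warning on Java 9-15;
            // throws InaccessibleObjectException on JDK 16+ without --add-opens.
            m.setAccessible(true);
            return m;
        } catch (NoSuchMethodException | RuntimeException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        Method m = findPendingReferenceMethod();
        System.out.println(m == null
                ? "no accessible pending-reference hook on this JVM"
                : "found hook: Reference." + m.getName());
    }
}
```

This mirrors the trade-off discussed in the ticket: the reflective lookup works without JVM flags on Java 8-15 (at the cost of the warning), while a clean solution needs version-specific `--add-opens` arguments.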
[jira] [Commented] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in Java 11
[ https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112350#comment-17112350 ]

Andrey Zagrebin commented on FLINK-17822:
-----------------------------------------

FLINK-15758 did not export the jdk.internal.misc package used by reflection for the GC cleaners of managed memory. Our PR CI does not run Java 11 tests atm.

The package has to be exported by a JVM runtime arg: --add-opens java.base/jdk.internal.misc=ALL-UNNAMED. If this arg is set for Java 8, it fails the JVM process. Therefore, the fix is complicated, as we also have to apply it for e.g. the Yarn CLI, where client and cluster may run different Java versions.

An alternative, quicker fix is to call the private method directly (it has to be made accessible via reflection):
- java.lang.ref.Reference.tryHandlePending(false) // for Java 8
- java.lang.ref.Reference.waitForReferenceProcessing() // for Java 11

This, though, leads to an annoying warning for Java 11:

{code:java}
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunnerProvider (file:/Users/azagrebin/projects/flink/flink-core/target/classes/) to method java.lang.ref.Reference.waitForReferenceProcessing()
WARNING: Please consider reporting this to the maintainers of org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunnerProvider
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
{code}

We can do the quick fix and think about how to fix the warning in a follow-up.
> Nightly Flink CLI end-to-end test failed with > "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class > jdk.internal.misc.SharedSecrets" in Java 11 > -- > > Key: FLINK-17822 > URL: https://issues.apache.org/jira/browse/FLINK-17822 > Project: Flink > Issue Type: Bug > Components: Runtime / Task, Tests >Affects Versions: 1.11.0 >Reporter: Dian Fu >Assignee: Andrey Zagrebin >Priority: Blocker > Labels: test-stability > Fix For: 1.11.0 > > > Instance: > https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1887/logs/600 > {code} > 2020-05-19T21:59:39.8829043Z 2020-05-19 21:59:25,193 ERROR > org.apache.flink.util.JavaGcCleanerWrapper [] - FATAL > UNEXPECTED - Failed to invoke waitForReferenceProcessing > 2020-05-19T21:59:39.8829849Z java.lang.IllegalAccessException: class > org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner cannot > access class jdk.internal.misc.SharedSecrets (in module java.base) because > module java.base does not export jdk.internal.misc to unnamed module @54e3658c > 2020-05-19T21:59:39.8830707Z at > jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361) > ~[?:?] > 2020-05-19T21:59:39.8831166Z at > java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:591) > ~[?:?] > 2020-05-19T21:59:39.8831744Z at > java.lang.reflect.Method.invoke(Method.java:558) ~[?:?] 
> 2020-05-19T21:59:39.8832596Z at > org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.getJavaLangRefAccess(JavaGcCleanerWrapper.java:362) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8833667Z at > org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.tryRunPendingCleaners(JavaGcCleanerWrapper.java:351) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8834712Z at > org.apache.flink.util.JavaGcCleanerWrapper$CleanerManager.tryRunPendingCleaners(JavaGcCleanerWrapper.java:207) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8835686Z at > org.apache.flink.util.JavaGcCleanerWrapper.tryRunPendingCleaners(JavaGcCleanerWrapper.java:158) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8836652Z at > org.apache.flink.runtime.memory.UnsafeMemoryBudget.reserveMemory(UnsafeMemoryBudget.java:94) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8838033Z at > org.apache.flink.runtime.memory.UnsafeMemoryBudget.verifyEmpty(UnsafeMemoryBudget.java:64) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8839259Z at > org.apache.flink.runtime.memory.MemoryManager.verifyEmpty(MemoryManager.java:172) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8840148Z at > org.apache.flink.runtime.taskexecutor.slot.TaskSlot.verifyMemoryFreed(TaskSlot.java:311) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8841035Z
[GitHub] [flink] dawidwys commented on pull request #12189: [FLINK-17376][API/DataStream]Deprecated methods and related code updated
dawidwys commented on pull request #12189: URL: https://github.com/apache/flink/pull/12189#issuecomment-631543033

Hey @mghildiy, do you still want to work on this issue? It actually blocks the 1.11 release. I see you haven't addressed @aljoscha's comment yet. That's fine, but if you don't have time, maybe somebody else could take over.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #12269: [FLINK-17351] [runtime] Increase `continuousFailureCounter` in `CheckpointFailureManager` for CHECKPOINT_EXPIRED
flinkbot commented on pull request #12269: URL: https://github.com/apache/flink/pull/12269#issuecomment-631541996 ## CI report: * 24c44fd00652a6b5859075b3afea1e4e9ca98445 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #12267: [FLINK-17842][network] Fix performance regression in SpanningWrapper#clear
flinkbot edited a comment on pull request #12267: URL: https://github.com/apache/flink/pull/12267#issuecomment-631489829 ## CI report: * 0afb379748084b4aef0fdf51c57e24044dfc31df Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1949) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #12244: [FLINK-17258][network] Fix couple of ITCases that were failing with enabled unaligned checkpoints
flinkbot edited a comment on pull request #12244: URL: https://github.com/apache/flink/pull/12244#issuecomment-630723509 ## CI report: * 3dcc9233af810b8be408665c0083fab404a2dea5 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1833) * 7fa2068a283b9471384248c1bf301e3d406b5f48 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1951) * d0d2042ea348654430863ccb51084c30714d8a47 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Closed] (FLINK-17675) Resolve CVE-2019-11358 from jquery
[ https://issues.apache.org/jira/browse/FLINK-17675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Metzger closed FLINK-17675. -- Fix Version/s: 1.12.0 1.11.0 Resolution: Fixed Merged to master / 1.12.0 in https://github.com/apache/flink/commit/ffc4ae1a3d9bf678268a3d09d5d3d14caf6cddfb Merged to release-1.11 / 1.11.0 in 861efd05e382fa122520ab253f4278fa37bb2bad > Resolve CVE-2019-11358 from jquery > -- > > Key: FLINK-17675 > URL: https://issues.apache.org/jira/browse/FLINK-17675 > Project: Flink > Issue Type: Improvement > Components: Documentation >Reporter: Koala Lam >Assignee: Robert Metzger >Priority: Major > Labels: pull-request-available > Fix For: 1.11.0, 1.12.0 > > > https://nvd.nist.gov/vuln/detail/CVE-2019-11358 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] rmetzger closed pull request #12229: [FLINK-17675][docs] Update jquery dependency to 3.5.1
rmetzger closed pull request #12229: URL: https://github.com/apache/flink/pull/12229 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Closed] (FLINK-17846) flink-walkthrough-table-scala failed on azure
[ https://issues.apache.org/jira/browse/FLINK-17846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Wysakowicz closed FLINK-17846. Resolution: Fixed Fixed in: master: 400df32ab2a18a52c8511a901a4dd22cb6827700 1.11: 8044166877efc42c42a80344992d66ba18748a4d > flink-walkthrough-table-scala failed on azure > - > > Key: FLINK-17846 > URL: https://issues.apache.org/jira/browse/FLINK-17846 > Project: Flink > Issue Type: Bug > Components: Table SQL / API, Tests >Affects Versions: 1.11.0 >Reporter: Dawid Wysakowicz >Assignee: Dawid Wysakowicz >Priority: Blocker > Fix For: 1.11.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1941&view=results > {code} > 2020-05-20T14:38:59.8981285Z [WARNING] > org.apache.flink:flink-scala_2.11:1.12-SNAPSHOT requires scala version: > 2.11.12 > 2020-05-20T14:38:59.8982349Z [WARNING] org.scala-lang:scala-compiler:2.11.12 > requires scala version: 2.11.12 > 2020-05-20T14:38:59.8983299Z [WARNING] > org.scala-lang.modules:scala-xml_2.11:1.0.5 requires scala version: 2.11.7 > 2020-05-20T14:38:59.8983897Z [WARNING] Multiple versions of scala libraries > detected! 
> 2020-05-20T14:38:59.8984777Z [INFO] > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-53406840715/flink-walkthrough-table-scala/src/main/scala:-1: > info: compiling > 2020-05-20T14:38:59.8986393Z [INFO] Compiling 1 source files to > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-53406840715/flink-walkthrough-table-scala/target/classes > at 1589985538160 > 2020-05-20T14:38:59.8987734Z [ERROR] > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-53406840715/flink-walkthrough-table-scala/src/main/scala/org/apache/flink/walkthrough/SpendReport.scala:28: > error: not found: value BatchTableEnvironment > 2020-05-20T14:38:59.8988549Z [ERROR] val tEnv = > BatchTableEnvironment.create(env) > 2020-05-20T14:38:59.8988905Z [ERROR]^ > 2020-05-20T14:38:59.8989186Z [ERROR] one error found > 2020-05-20T14:38:59.8990571Z [INFO] > > 2020-05-20T14:38:59.8991177Z [INFO] BUILD FAILURE > 2020-05-20T14:38:59.8992000Z [INFO] > > 2020-05-20T14:38:59.8992556Z [INFO] Total time: 3.627 s > 2020-05-20T14:38:59.8993292Z [INFO] Finished at: 2020-05-20T14:38:59+00:00 > 2020-05-20T14:38:59.8993939Z [INFO] Final Memory: 21M/305M > 2020-05-20T14:38:59.8994935Z [INFO] > > 2020-05-20T14:38:59.8996009Z [ERROR] Failed to execute goal > net.alchim31.maven:scala-maven-plugin:3.2.2:compile (default) on project > flink-walkthrough-table-scala: wrap: > org.apache.commons.exec.ExecuteException: Process exited with an error: 1 > (Exit value: 1) -> [Help 1] > 2020-05-20T14:38:59.8996670Z [ERROR] > 2020-05-20T14:38:59.8997248Z [ERROR] To see the full stack trace of the > errors, re-run Maven with the -e switch. > 2020-05-20T14:38:59.8997936Z [ERROR] Re-run Maven using the -X switch to > enable full debug logging. 
> 2020-05-20T14:38:59.8998292Z [ERROR] > 2020-05-20T14:38:59.8998695Z [ERROR] For more information about the errors > and possible solutions, please read the following articles: > 2020-05-20T14:38:59.8999194Z [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on pull request #12264: [FLINK-17558][netty] Release partitions asynchronously
flinkbot edited a comment on pull request #12264: URL: https://github.com/apache/flink/pull/12264#issuecomment-631349883 ## CI report: * 19c5f57b94cc56b70002031618c32d9e6f68effb UNKNOWN * 9dbaf3094c0942b96a01060aba9d4ffbad9d1857 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1934) * bb313e40f5a72dbf20cd0a8b48267063fd4f00af UNKNOWN * eafbd98c812227cb7d9ce7158de1a23309855509 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1948) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] StephanEwen commented on pull request #12234: [FLINK-16986][coordination] Provide exactly-once guaranteed around checkpoints and operator event sending
StephanEwen commented on pull request #12234: URL: https://github.com/apache/flink/pull/12234#issuecomment-631529855 Kindly asking that @flinkbot run azure This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #12268: [FLINK-17375] Refactor travis_watchdog.sh into separate ci and azure scripts.
flinkbot edited a comment on pull request #12268: URL: https://github.com/apache/flink/pull/12268#issuecomment-631512695 ## CI report: * 4ed6888375869e654816264124703e72439c6148 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1955) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #12263: [FLINK-16998][core] Support backwards compatibility for upgraded RowSerializer
flinkbot edited a comment on pull request #12263: URL: https://github.com/apache/flink/pull/12263#issuecomment-631274882 ## CI report: * 320f0a551c635e98c4aff4af6d853d3cf2681fee Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1944) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Closed] (FLINK-14894) HybridOffHeapUnsafeMemorySegmentTest#testByteBufferWrap failed on Travis
[ https://issues.apache.org/jira/browse/FLINK-14894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrey Zagrebin closed FLINK-14894. --- Resolution: Fixed should be fixed by FLINK-15758 > HybridOffHeapUnsafeMemorySegmentTest#testByteBufferWrap failed on Travis > > > Key: FLINK-14894 > URL: https://issues.apache.org/jira/browse/FLINK-14894 > Project: Flink > Issue Type: Bug > Components: Runtime / Network, Tests >Affects Versions: 1.10.0 >Reporter: Gary Yao >Assignee: Andrey Zagrebin >Priority: Critical > Labels: pull-request-available, test-stability > Fix For: 1.11.0, 1.10.2 > > Time Spent: 20m > Remaining Estimate: 0h > > {noformat} > HybridOffHeapUnsafeMemorySegmentTest>MemorySegmentTestBase.testByteBufferWrapping:2465 > expected:<992288337> but was:<196608> > {noformat} > https://api.travis-ci.com/v3/job/258950527/log.txt -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17846) flink-walkthrough-table-scala failed on azure
[ https://issues.apache.org/jira/browse/FLINK-17846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Wysakowicz updated FLINK-17846: - Priority: Blocker (was: Major) > flink-walkthrough-table-scala failed on azure > - > > Key: FLINK-17846 > URL: https://issues.apache.org/jira/browse/FLINK-17846 > Project: Flink > Issue Type: Bug > Components: Table SQL / API, Tests >Affects Versions: 1.11.0 >Reporter: Dawid Wysakowicz >Assignee: Dawid Wysakowicz >Priority: Blocker > Fix For: 1.11.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1941&view=results > {code} > 2020-05-20T14:38:59.8981285Z [WARNING] > org.apache.flink:flink-scala_2.11:1.12-SNAPSHOT requires scala version: > 2.11.12 > 2020-05-20T14:38:59.8982349Z [WARNING] org.scala-lang:scala-compiler:2.11.12 > requires scala version: 2.11.12 > 2020-05-20T14:38:59.8983299Z [WARNING] > org.scala-lang.modules:scala-xml_2.11:1.0.5 requires scala version: 2.11.7 > 2020-05-20T14:38:59.8983897Z [WARNING] Multiple versions of scala libraries > detected! 
> 2020-05-20T14:38:59.8984777Z [INFO] > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-53406840715/flink-walkthrough-table-scala/src/main/scala:-1: > info: compiling > 2020-05-20T14:38:59.8986393Z [INFO] Compiling 1 source files to > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-53406840715/flink-walkthrough-table-scala/target/classes > at 1589985538160 > 2020-05-20T14:38:59.8987734Z [ERROR] > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-53406840715/flink-walkthrough-table-scala/src/main/scala/org/apache/flink/walkthrough/SpendReport.scala:28: > error: not found: value BatchTableEnvironment > 2020-05-20T14:38:59.8988549Z [ERROR] val tEnv = > BatchTableEnvironment.create(env) > 2020-05-20T14:38:59.8988905Z [ERROR]^ > 2020-05-20T14:38:59.8989186Z [ERROR] one error found > 2020-05-20T14:38:59.8990571Z [INFO] > > 2020-05-20T14:38:59.8991177Z [INFO] BUILD FAILURE > 2020-05-20T14:38:59.8992000Z [INFO] > > 2020-05-20T14:38:59.8992556Z [INFO] Total time: 3.627 s > 2020-05-20T14:38:59.8993292Z [INFO] Finished at: 2020-05-20T14:38:59+00:00 > 2020-05-20T14:38:59.8993939Z [INFO] Final Memory: 21M/305M > 2020-05-20T14:38:59.8994935Z [INFO] > > 2020-05-20T14:38:59.8996009Z [ERROR] Failed to execute goal > net.alchim31.maven:scala-maven-plugin:3.2.2:compile (default) on project > flink-walkthrough-table-scala: wrap: > org.apache.commons.exec.ExecuteException: Process exited with an error: 1 > (Exit value: 1) -> [Help 1] > 2020-05-20T14:38:59.8996670Z [ERROR] > 2020-05-20T14:38:59.8997248Z [ERROR] To see the full stack trace of the > errors, re-run Maven with the -e switch. > 2020-05-20T14:38:59.8997936Z [ERROR] Re-run Maven using the -X switch to > enable full debug logging. 
> 2020-05-20T14:38:59.8998292Z [ERROR] > 2020-05-20T14:38:59.8998695Z [ERROR] For more information about the errors > and possible solutions, please read the following articles: > 2020-05-20T14:38:59.8999194Z [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-17846) flink-walkthrough-table-scala failed on azure
Dawid Wysakowicz created FLINK-17846: Summary: flink-walkthrough-table-scala failed on azure Key: FLINK-17846 URL: https://issues.apache.org/jira/browse/FLINK-17846 Project: Flink Issue Type: Bug Components: Table SQL / API, Tests Affects Versions: 1.11.0 Reporter: Dawid Wysakowicz Assignee: Dawid Wysakowicz Fix For: 1.11.0 https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1941&view=results {code} 2020-05-20T14:38:59.8981285Z [WARNING] org.apache.flink:flink-scala_2.11:1.12-SNAPSHOT requires scala version: 2.11.12 2020-05-20T14:38:59.8982349Z [WARNING] org.scala-lang:scala-compiler:2.11.12 requires scala version: 2.11.12 2020-05-20T14:38:59.8983299Z [WARNING] org.scala-lang.modules:scala-xml_2.11:1.0.5 requires scala version: 2.11.7 2020-05-20T14:38:59.8983897Z [WARNING] Multiple versions of scala libraries detected! 2020-05-20T14:38:59.8984777Z [INFO] /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-53406840715/flink-walkthrough-table-scala/src/main/scala:-1: info: compiling 2020-05-20T14:38:59.8986393Z [INFO] Compiling 1 source files to /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-53406840715/flink-walkthrough-table-scala/target/classes at 1589985538160 2020-05-20T14:38:59.8987734Z [ERROR] /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-53406840715/flink-walkthrough-table-scala/src/main/scala/org/apache/flink/walkthrough/SpendReport.scala:28: error: not found: value BatchTableEnvironment 2020-05-20T14:38:59.8988549Z [ERROR] val tEnv = BatchTableEnvironment.create(env) 2020-05-20T14:38:59.8988905Z [ERROR]^ 2020-05-20T14:38:59.8989186Z [ERROR] one error found 2020-05-20T14:38:59.8990571Z [INFO] 2020-05-20T14:38:59.8991177Z [INFO] BUILD FAILURE 2020-05-20T14:38:59.8992000Z [INFO] 2020-05-20T14:38:59.8992556Z [INFO] Total time: 3.627 s 2020-05-20T14:38:59.8993292Z [INFO] Finished at: 2020-05-20T14:38:59+00:00 2020-05-20T14:38:59.8993939Z [INFO] Final Memory: 
21M/305M 2020-05-20T14:38:59.8994935Z [INFO] 2020-05-20T14:38:59.8996009Z [ERROR] Failed to execute goal net.alchim31.maven:scala-maven-plugin:3.2.2:compile (default) on project flink-walkthrough-table-scala: wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: 1 (Exit value: 1) -> [Help 1] 2020-05-20T14:38:59.8996670Z [ERROR] 2020-05-20T14:38:59.8997248Z [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. 2020-05-20T14:38:59.8997936Z [ERROR] Re-run Maven using the -X switch to enable full debug logging. 2020-05-20T14:38:59.8998292Z [ERROR] 2020-05-20T14:38:59.8998695Z [ERROR] For more information about the errors and possible solutions, please read the following articles: 2020-05-20T14:38:59.8999194Z [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot commented on pull request #12269: [FLINK-17351] [runtime] Increase `continuousFailureCounter` in `CheckpointFailureManager` for CHECKPOINT_EXPIRED
flinkbot commented on pull request #12269: URL: https://github.com/apache/flink/pull/12269#issuecomment-631524496

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review.

## Automated Checks

Last check on commit 8da849c3217f1ff94fcf65f63c5eea7f9cd49ed5 (Wed May 20 14:52:21 UTC 2020)

**Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.

## Review Progress

* ❓ 1. The [description] looks good.
* ❓ 2. There is [consensus] that the contribution should go into Flink.
* ❓ 3. Needs [attention] from.
* ❓ 4. The change fits into the overall [architecture].
* ❓ 5. Overall code [quality] is good.

Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process.

The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands

The @flinkbot bot supports the following commands:
- `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until `architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-17351) CheckpointCoordinator and CheckpointFailureManager ignores checkpoint timeouts
[ https://issues.apache.org/jira/browse/FLINK-17351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-17351: --- Labels: pull-request-available (was: ) > CheckpointCoordinator and CheckpointFailureManager ignores checkpoint timeouts > -- > > Key: FLINK-17351 > URL: https://issues.apache.org/jira/browse/FLINK-17351 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.9.2, 1.10.0 >Reporter: Piotr Nowojski >Assignee: Yuan Mei >Priority: Critical > Labels: pull-request-available > Fix For: 1.11.0 > > > As described in point 2: > https://issues.apache.org/jira/browse/FLINK-17327?focusedCommentId=17090576&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17090576 > (copy of description from above linked comment): > The logic in how {{CheckpointCoordinator}} handles checkpoint timeouts is > broken. In [~qinjunjerry]'s examples, the job should have failed after the > first checkpoint failure, but checkpoints were timing out on the > CheckpointCoordinator after 5 seconds, before {{FlinkKafkaProducer}} detected > the Kafka failure after 2 minutes. Those timeouts were not checked against > the {{setTolerableCheckpointFailureNumber(...)}} limit, so the job kept going > with many timed-out checkpoints. Now a funny thing happens once > FlinkKafkaProducer detects the Kafka failure: what happens next depends on > where the failure was detected: > a) on processing a record? no problem, the job will fail over immediately once > the failure is detected (in this example after 2 minutes) > b) on a checkpoint? heh, the failure is reported to {{CheckpointCoordinator}} > *and gets ignored, as the PendingCheckpoint was already discarded 2 minutes > earlier* :) So theoretically the checkpoints can keep failing forever and the job > will not restart automatically, unless something else fails. > Even funnier things can happen if we mix FLINK-17350 or b) with an > intermittent external system failure. 
The Sink reports an exception, the transaction > was lost/aborted, and the Sink is in a failed state; but if by a happy > coincidence it manages to accept further records, this exception can be > lost and all of the records in those failed checkpoints will be lost forever > as well. In none of the examples that [~qinjunjerry] posted did this > happen: {{FlinkKafkaProducer}} was not able to recover after the initial > failure and kept throwing exceptions until the job finally failed (but > much later than it should have). And that's not guaranteed anywhere. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-17817) CollectResultFetcher fails with EOFException in AggregateReduceGroupingITCase
[ https://issues.apache.org/jira/browse/FLINK-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112304#comment-17112304 ] Dawid Wysakowicz commented on FLINK-17817: -- I will assign you [~TsReaper] the issue, as I think you are working on it, ok? > CollectResultFetcher fails with EOFException in AggregateReduceGroupingITCase > - > > Key: FLINK-17817 > URL: https://issues.apache.org/jira/browse/FLINK-17817 > Project: Flink > Issue Type: Bug > Components: API / DataStream, Tests >Affects Versions: 1.11.0 >Reporter: Robert Metzger >Priority: Blocker > Labels: pull-request-available, test-stability > Fix For: 1.11.0 > > > CI: > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1826&view=logs&j=e25d5e7e-2a9c-5589-4940-0b638d75a414&t=f83cd372-208c-5ec4-12a8-337462457129 > {code} > 2020-05-19T10:34:18.3224679Z [ERROR] > testSingleAggOnTable_SortAgg(org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase) > Time elapsed: 7.537 s <<< ERROR! 
> 2020-05-19T10:34:18.3225273Z java.lang.RuntimeException: Failed to fetch next > result > 2020-05-19T10:34:18.3227634Z at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:92) > 2020-05-19T10:34:18.3228518Z at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:63) > 2020-05-19T10:34:18.3229170Z at > org.apache.flink.shaded.guava18.com.google.common.collect.Iterators.addAll(Iterators.java:361) > 2020-05-19T10:34:18.3229863Z at > org.apache.flink.shaded.guava18.com.google.common.collect.Lists.newArrayList(Lists.java:160) > 2020-05-19T10:34:18.3230586Z at > org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300) > 2020-05-19T10:34:18.3231303Z at > org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:141) > 2020-05-19T10:34:18.3231996Z at > org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:107) > 2020-05-19T10:34:18.3232847Z at > org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable(AggregateReduceGroupingITCase.scala:176) > 2020-05-19T10:34:18.3233694Z at > org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable_SortAgg(AggregateReduceGroupingITCase.scala:122) > 2020-05-19T10:34:18.3234461Z at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2020-05-19T10:34:18.3234983Z at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2020-05-19T10:34:18.3235632Z at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2020-05-19T10:34:18.3236615Z at > java.lang.reflect.Method.invoke(Method.java:498) > 2020-05-19T10:34:18.3237256Z at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > 2020-05-19T10:34:18.3237965Z at > 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2020-05-19T10:34:18.3238750Z at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > 2020-05-19T10:34:18.3239314Z at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2020-05-19T10:34:18.3239838Z at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > 2020-05-19T10:34:18.3240362Z at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > 2020-05-19T10:34:18.3240803Z at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > 2020-05-19T10:34:18.3243624Z at > org.junit.rules.RunRules.evaluate(RunRules.java:20) > 2020-05-19T10:34:18.3244531Z at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > 2020-05-19T10:34:18.3245325Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > 2020-05-19T10:34:18.3246086Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > 2020-05-19T10:34:18.3246765Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-19T10:34:18.3247390Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-19T10:34:18.3248012Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-19T10:34:18.3248779Z at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2020-05-19T10:34:18.3249417Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-19T10:34:18.3250357Z at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.
[jira] [Assigned] (FLINK-17817) CollectResultFetcher fails with EOFException in AggregateReduceGroupingITCase
[ https://issues.apache.org/jira/browse/FLINK-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Wysakowicz reassigned FLINK-17817: Assignee: Caizhi Weng > CollectResultFetcher fails with EOFException in AggregateReduceGroupingITCase > - > > Key: FLINK-17817 > URL: https://issues.apache.org/jira/browse/FLINK-17817 > Project: Flink > Issue Type: Bug > Components: API / DataStream, Tests >Affects Versions: 1.11.0 >Reporter: Robert Metzger >Assignee: Caizhi Weng >Priority: Blocker > Labels: pull-request-available, test-stability > Fix For: 1.11.0 > > > CI: > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1826&view=logs&j=e25d5e7e-2a9c-5589-4940-0b638d75a414&t=f83cd372-208c-5ec4-12a8-337462457129 > {code} > 2020-05-19T10:34:18.3224679Z [ERROR] > testSingleAggOnTable_SortAgg(org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase) > Time elapsed: 7.537 s <<< ERROR! > 2020-05-19T10:34:18.3225273Z java.lang.RuntimeException: Failed to fetch next > result > 2020-05-19T10:34:18.3227634Z at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:92) > 2020-05-19T10:34:18.3228518Z at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:63) > 2020-05-19T10:34:18.3229170Z at > org.apache.flink.shaded.guava18.com.google.common.collect.Iterators.addAll(Iterators.java:361) > 2020-05-19T10:34:18.3229863Z at > org.apache.flink.shaded.guava18.com.google.common.collect.Lists.newArrayList(Lists.java:160) > 2020-05-19T10:34:18.3230586Z at > org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300) > 2020-05-19T10:34:18.3231303Z at > org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:141) > 2020-05-19T10:34:18.3231996Z at > org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:107) 
> 2020-05-19T10:34:18.3232847Z at > org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable(AggregateReduceGroupingITCase.scala:176) > 2020-05-19T10:34:18.3233694Z at > org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable_SortAgg(AggregateReduceGroupingITCase.scala:122) > 2020-05-19T10:34:18.3234461Z at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2020-05-19T10:34:18.3234983Z at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2020-05-19T10:34:18.3235632Z at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2020-05-19T10:34:18.3236615Z at > java.lang.reflect.Method.invoke(Method.java:498) > 2020-05-19T10:34:18.3237256Z at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > 2020-05-19T10:34:18.3237965Z at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2020-05-19T10:34:18.3238750Z at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > 2020-05-19T10:34:18.3239314Z at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2020-05-19T10:34:18.3239838Z at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > 2020-05-19T10:34:18.3240362Z at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > 2020-05-19T10:34:18.3240803Z at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > 2020-05-19T10:34:18.3243624Z at > org.junit.rules.RunRules.evaluate(RunRules.java:20) > 2020-05-19T10:34:18.3244531Z at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > 2020-05-19T10:34:18.3245325Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > 2020-05-19T10:34:18.3246086Z at > 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > 2020-05-19T10:34:18.3246765Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-19T10:34:18.3247390Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-19T10:34:18.3248012Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-19T10:34:18.3248779Z at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2020-05-19T10:34:18.3249417Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-19T10:34:18.3250357Z at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > 2020-05-19T10:34:18.3251021Z at > org.junit.rules.Extern
[GitHub] [flink] curcur opened a new pull request #12269: [FLINK-17351] [runtime] Increase `continuousFailureCounter` in `CheckpointFailureManager` for CHECKPOINT_EXPIRED
curcur opened a new pull request #12269: URL: https://github.com/apache/flink/pull/12269 ## What is the purpose of the change Before this PR, `CHECKPOINT_EXPIRED` is not counted in `continuousFailureCounter`. Hence, if a checkpointing failure is detected after the checkpoint times out, the failure gets ignored since the `PendingCheckpoint` has already been discarded, in theory leaving the job unable to restart automatically unless something else fails. This PR counts `CHECKPOINT_EXPIRED` in `continuousFailureCounter`. ## Brief change log - `CHECKPOINT_EXPIRED` is counted in `CheckpointFailureManager#continuousFailureCounter`. ## Verifying this change unit tests `CheckpointCoordinatorTest#testExpiredCheckpointExceedsTolerableFailureNumber` ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): no - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: no - The serializers: no - The runtime per-record code paths (performance sensitive): no - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: Checkpointing - The S3 file system connector: no ## Documentation - Does this pull request introduce a new feature? no - If yes, how is the feature documented? not applicable This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
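The counting behavior described in this PR can be sketched roughly as follows. All class, method, and enum names here are simplified assumptions for illustration, not the actual `CheckpointFailureManager` implementation:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of tolerable-failure counting in a checkpoint
// failure manager. Names are assumptions, not Flink's real API.
enum FailureReason { CHECKPOINT_DECLINED, CHECKPOINT_EXPIRED, IO_EXCEPTION, OTHER }

class FailureCounterSketch {
    private final int tolerableFailures;
    private final AtomicInteger continuousFailureCounter = new AtomicInteger(0);

    FailureCounterSketch(int tolerableFailures) {
        this.tolerableFailures = tolerableFailures;
    }

    /** Returns true when the continuous failure count exceeds the tolerable limit. */
    boolean handleCheckpointFailure(FailureReason reason) {
        switch (reason) {
            case CHECKPOINT_EXPIRED:  // counted after this PR; previously ignored
            case CHECKPOINT_DECLINED:
            case IO_EXCEPTION:
                return continuousFailureCounter.incrementAndGet() > tolerableFailures;
            default:
                return false;         // non-counted reasons leave the counter untouched
        }
    }

    /** A successful checkpoint resets the continuous failure counter. */
    void handleCheckpointSuccess() {
        continuousFailureCounter.set(0);
    }
}
```

With a tolerable limit of 2, the third consecutive expired checkpoint would trip the failure, while a successful checkpoint in between resets the streak.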
[jira] [Created] (FLINK-17845) Can't remove a table connector property with ALTER TABLE
Fabian Hueske created FLINK-17845: - Summary: Can't remove a table connector property with ALTER TABLE Key: FLINK-17845 URL: https://issues.apache.org/jira/browse/FLINK-17845 Project: Flink Issue Type: Bug Components: Table SQL / API Reporter: Fabian Hueske It is not possible to remove an existing table property from a table. Looking at the [source code|https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/sqlexec/SqlToOperationConverter.java#L295] this seems to be the intended semantics, but it seems counter-intuitive to me. If I create a table with the following statement: {code} CREATE TABLE `testTable` ( id INT ) WITH ( 'connector.type' = 'kafka', 'connector.version' = 'universal', 'connector.topicX' = 'test', -- Woops, I made a typo here [...] ) {code} The statement will be successfully executed. However, the table cannot be used due to the typo. Fixing the typo with the following DDL is not possible: {code} ALTER TABLE `testTable` SET ( 'connector.type' = 'kafka', 'connector.version' = 'universal', 'connector.topic' = 'test', -- Fixing the typo ) {code} because the key {{connector.topicX}} is not removed. Right now it seems that the only way to fix a table with an invalid key is to DROP and CREATE it. I think that this use case should be supported by ALTER TABLE. I would even argue that the expected behavior is that previous properties are removed and replaced by the new properties. -- This message was sent by Atlassian Jira (v8.3.4#803005)
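The merge-versus-replace semantics at issue can be illustrated with plain property maps. This is an illustrative sketch only, not the actual `SqlToOperationConverter` code:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the two possible ALTER TABLE ... SET semantics for the
// connector property map (illustrative; not Flink's implementation).
class AlterTableSemantics {

    /** Current behavior: SET merges the new properties over the existing ones,
     *  so a mistyped old key such as 'connector.topicX' survives. */
    static Map<String, String> merge(Map<String, String> existing, Map<String, String> alter) {
        Map<String, String> result = new HashMap<>(existing);
        result.putAll(alter);
        return result;
    }

    /** Proposed behavior: SET replaces the property map entirely,
     *  dropping keys that are absent from the ALTER TABLE statement. */
    static Map<String, String> replace(Map<String, String> existing, Map<String, String> alter) {
        return new HashMap<>(alter);
    }
}
```

Under merge semantics the typo key can never be removed, which is exactly why DROP and CREATE is currently the only fix.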
[jira] [Closed] (FLINK-17762) Postgres Catalog should pass table's primary key to catalogTable
[ https://issues.apache.org/jira/browse/FLINK-17762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu closed FLINK-17762. --- Fix Version/s: 1.11.0 Resolution: Fixed > Postgres Catalog should pass table's primary key to catalogTable > > > Key: FLINK-17762 > URL: https://issues.apache.org/jira/browse/FLINK-17762 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / Ecosystem >Reporter: Leonard Xu >Priority: Major > Fix For: 1.11.0 > > > for upsert query, if the table comes from a catalog rather than create in > FLINK, Postgres Catalog should pass table's primary key to catalogTable so > that JdbcDynamicTableSink can determine to work on upsert mode or append only > mode. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (FLINK-17356) Pass table's primary key to catalog table in PostgresCatalog
[ https://issues.apache.org/jira/browse/FLINK-17356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu closed FLINK-17356. --- Fix Version/s: 1.11.0 Resolution: Fixed Add IT cases for inserting group by query into posgres catalog table - master (1.12.0): fa3768a82fd880178f5c8cb71c28510dd4db4d30 - 1.11.0: a83ee6c90b605f0807a40c82f2f5879f80f1f2dd Support PK and Unique constraints - master (1.12.0): 38ada4ad5ece2d28707e9403278133d8e5790ec0 - 1.11.0: b37626bd2a43f9a39a954ef63a924da23a2c3825 > Pass table's primary key to catalog table in PostgresCatalog > > > Key: FLINK-17356 > URL: https://issues.apache.org/jira/browse/FLINK-17356 > Project: Flink > Issue Type: Sub-task > Components: Connectors / JDBC, Table SQL / Ecosystem >Reporter: Flavio Pompermaier >Assignee: Flavio Pompermaier >Priority: Major > Labels: pull-request-available > Fix For: 1.11.0 > > > At the moment the PostgresCatalog does not create field constraints (at the > moment there's only UNIQUE and PRIMARY_KEY in the TableSchema..could it > worth to add also NOT_NULL?) > We only pass primary key to catalog table for now. UNIQUE and NOT NULL > information will be future work. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] Jiayi-Liao commented on a change in pull request #12243: [FLINK-17805][network] Fix ArrayIndexOutOfBound for rotated input gate indexes
Jiayi-Liao commented on a change in pull request #12243: URL: https://github.com/apache/flink/pull/12243#discussion_r428063945 ## File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/io/InputProcessorUtil.java ## @@ -79,11 +80,26 @@ public static CheckpointedInputGate createCheckpointedInputGate( unionedInputGates[i] = InputGateUtil.createInputGate(inputGates[i].toArray(new IndexedInputGate[0])); } + IntStream numberOfInputChannelsPerGate = + Arrays + .stream(inputGates) + .flatMap(collection -> collection.stream()) + .sorted(Comparator.comparingInt(IndexedInputGate::getGateIndex)) + .mapToInt(InputGate::getNumberOfInputChannels); + Map inputGateToChannelIndexOffset = generateInputGateToChannelIndexOffsetMap(unionedInputGates); + // Note that numberOfInputChannelsPerGate and inputGateToChannelIndexOffset have a bit different Review comment: You're right. I didn't notice that `inputGateToChannelIndexOffset`'s key is an unioned InputGate. Thanks for pointing this out. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
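The offset bookkeeping discussed in this review thread amounts to a prefix sum over per-gate channel counts, taken in gate-index order. A simplified sketch (not the actual `InputProcessorUtil` code):

```java
// Simplified sketch: compute each gate's first overall channel index as a
// prefix sum over the gates' channel counts, ordered by gate index.
class ChannelIndexOffsets {
    static int[] offsets(int[] channelsPerGate) {
        int[] offsets = new int[channelsPerGate.length];
        int running = 0;
        for (int i = 0; i < channelsPerGate.length; i++) {
            offsets[i] = running;          // first channel index of gate i
            running += channelsPerGate[i]; // advance past this gate's channels
        }
        return offsets;
    }
}
```

For gates with 3, 2, and 4 channels, the offsets are 0, 3, and 5; an out-of-order iteration over the gates would compute wrong offsets, which is the kind of index bug the PR addresses.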
[GitHub] [flink] flinkbot edited a comment on pull request #12244: [FLINK-17258][network] Fix couple of ITCases that were failing with enabled unaligned checkpoints
flinkbot edited a comment on pull request #12244: URL: https://github.com/apache/flink/pull/12244#issuecomment-630723509 ## CI report: * 3dcc9233af810b8be408665c0083fab404a2dea5 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1833) * 7fa2068a283b9471384248c1bf301e3d406b5f48 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1951) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #12268: [FLINK-17375] Refactor travis_watchdog.sh into separate ci and azure scripts.
flinkbot commented on pull request #12268: URL: https://github.com/apache/flink/pull/12268#issuecomment-631512695 ## CI report: * 4ed6888375869e654816264124703e72439c6148 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #12243: [FLINK-17805][network] Fix ArrayIndexOutOfBound for rotated input gate indexes
flinkbot edited a comment on pull request #12243: URL: https://github.com/apache/flink/pull/12243#issuecomment-630723410 ## CI report: * a3be362324a56a5f9b118a09ea3552a3039acffe Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1950) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-17356) Pass table's primary key to catalog table in PostgresCatalog
[ https://issues.apache.org/jira/browse/FLINK-17356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu updated FLINK-17356: Description: At the moment the PostgresCatalog does not create field constraints (at the moment there's only UNIQUE and PRIMARY_KEY in the TableSchema..could it worth to add also NOT_NULL?) We only pass primary key to catalog table for now. UNIQUE and NOT NULL information will be future work. was:At the moment the PostgresCatalog does not create field constraints (at the moment there's only UNIQUE and PRIMARY_KEY in the TableSchema..could it worth to add also NOT_NULL?) > Pass table's primary key to catalog table in PostgresCatalog > > > Key: FLINK-17356 > URL: https://issues.apache.org/jira/browse/FLINK-17356 > Project: Flink > Issue Type: Sub-task > Components: Connectors / JDBC, Table SQL / Ecosystem >Reporter: Flavio Pompermaier >Assignee: Flavio Pompermaier >Priority: Major > Labels: pull-request-available > > At the moment the PostgresCatalog does not create field constraints (at the > moment there's only UNIQUE and PRIMARY_KEY in the TableSchema..could it > worth to add also NOT_NULL?) > We only pass primary key to catalog table for now. UNIQUE and NOT NULL > information will be future work. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17356) Pass table's primary key to catalog table in PostgresCatalog
[ https://issues.apache.org/jira/browse/FLINK-17356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu updated FLINK-17356: Summary: Pass table's primary key to catalog table in PostgresCatalog (was: Properly set constraints (PK and UNIQUE)) > Pass table's primary key to catalog table in PostgresCatalog > > > Key: FLINK-17356 > URL: https://issues.apache.org/jira/browse/FLINK-17356 > Project: Flink > Issue Type: Sub-task > Components: Connectors / JDBC, Table SQL / Ecosystem >Reporter: Flavio Pompermaier >Assignee: Flavio Pompermaier >Priority: Major > Labels: pull-request-available > > At the moment the PostgresCatalog does not create field constraints (at the > moment there's only UNIQUE and PRIMARY_KEY in the TableSchema..could it > worth to add also NOT_NULL?) -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-15503) FileUploadHandlerTest.testMixedMultipart and FileUploadHandlerTest. testUploadCleanupOnUnknownAttribute failed on Azure
[ https://issues.apache.org/jira/browse/FLINK-15503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112275#comment-17112275 ] Robert Metzger commented on FLINK-15503: I cancelled my test after 800 successful test runs. The slowest test run was 32.6 seconds. > FileUploadHandlerTest.testMixedMultipart and FileUploadHandlerTest. > testUploadCleanupOnUnknownAttribute failed on Azure > --- > > Key: FLINK-15503 > URL: https://issues.apache.org/jira/browse/FLINK-15503 > Project: Flink > Issue Type: Bug > Components: Runtime / REST, Tests >Affects Versions: 1.10.0 >Reporter: Till Rohrmann >Priority: Critical > Labels: test-stability > Fix For: 1.10.0 > > > The tests {{FileUploadHandlerTest.testMixedMultipart}} and > {{FileUploadHandlerTest. testUploadCleanupOnUnknownAttribute}} failed on > Azure with > {code} > 2020-01-07T09:32:06.9840445Z [ERROR] > testUploadCleanupOnUnknownAttribute(org.apache.flink.runtime.rest.FileUploadHandlerTest) > Time elapsed: 12.457 s <<< ERROR! > 2020-01-07T09:32:06.9850865Z java.net.SocketTimeoutException: timeout > 2020-01-07T09:32:06.9851650Z at > org.apache.flink.runtime.rest.FileUploadHandlerTest.testUploadCleanupOnUnknownAttribute(FileUploadHandlerTest.java:234) > 2020-01-07T09:32:06.9852910Z Caused by: java.net.SocketException: Socket > closed > 2020-01-07T09:32:06.9853465Z at > org.apache.flink.runtime.rest.FileUploadHandlerTest.testUploadCleanupOnUnknownAttribute(FileUploadHandlerTest.java:234) > 2020-01-07T09:32:06.9853855Z > 2020-01-07T09:32:06.9854362Z [ERROR] > testMixedMultipart(org.apache.flink.runtime.rest.FileUploadHandlerTest) Time > elapsed: 10.091 s <<< ERROR! 
> 2020-01-07T09:32:06.9855125Z java.net.SocketTimeoutException: Read timed out > 2020-01-07T09:32:06.9855652Z at > org.apache.flink.runtime.rest.FileUploadHandlerTest.testMixedMultipart(FileUploadHandlerTest.java:154) > 2020-01-07T09:32:06.9856034Z > {code} > https://dev.azure.com/rmetzger/Flink/_build/results?buildId=4159&view=results -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] wuchong closed pull request #11906: [FLINK-17356][jdbc][postgres] Support PK and Unique constraints
wuchong closed pull request #11906: URL: https://github.com/apache/flink/pull/11906 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Closed] (FLINK-16922) DecimalData.toUnscaledBytes should be consistent with BigDecimal.unscaledValue.toByteArray
[ https://issues.apache.org/jira/browse/FLINK-16922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu closed FLINK-16922. --- Resolution: Fixed master (1.12.0): 34671add8a435ee4431f4c1c4da37a8e078b7a8a 1.11.0: f7356560145f2bb862d1608264de3cf476f4abba > DecimalData.toUnscaledBytes should be consistent with > BigDecimal.unscaledValue.toByteArray > -- > > Key: FLINK-16922 > URL: https://issues.apache.org/jira/browse/FLINK-16922 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Reporter: Jingsong Lee >Assignee: Jark Wu >Priority: Critical > Labels: pull-request-available > Fix For: 1.11.0 > > > In Decimal: > {code:java} > public byte[] toUnscaledBytes() { > if (!isCompact()) { > return toBigDecimal().unscaledValue().toByteArray(); > } > // big endian; consistent with BigInteger.toByteArray() > byte[] bytes = new byte[8]; > long l = longVal; > for (int i = 0; i < 8; i++) { > bytes[7 - i] = (byte) l; > l >>>= 8; > } > return bytes; > } > {code} > When the value is compact, this returns a fixed 8-byte array. > This should not happen; it produces a byte array that is incompatible with BigInteger.toByteArray(). -- This message was sent by Atlassian Jira (v8.3.4#803005)
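The inconsistency is easy to reproduce with the JDK alone: `BigInteger.toByteArray()` produces the *minimal* two's-complement encoding, so encoding a compact decimal as a fixed 8-byte array disagrees with it. The helper class below is an illustrative sketch, not Flink's `DecimalData`:

```java
import java.math.BigDecimal;

// Demonstrates the mismatch between BigDecimal.unscaledValue().toByteArray()
// (minimal two's-complement) and a fixed 8-byte big-endian long encoding.
class UnscaledBytesDemo {
    /** The reference encoding the JIRA says should be matched. */
    static byte[] viaBigDecimal(long unscaled) {
        return BigDecimal.valueOf(unscaled).unscaledValue().toByteArray();
    }

    /** The pre-fix compact path: always 8 big-endian bytes. */
    static byte[] viaFixedLong(long unscaled) {
        byte[] bytes = new byte[8];
        long l = unscaled;
        for (int i = 0; i < 8; i++) {
            bytes[7 - i] = (byte) l;
            l >>>= 8;
        }
        return bytes;
    }
}
```

For the unscaled value 1, the reference encoding is a single byte while the fixed encoding is 8 bytes, so the two byte arrays cannot round-trip through each other.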
[jira] [Updated] (FLINK-17750) YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint failed on azure
[ https://issues.apache.org/jira/browse/FLINK-17750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Till Rohrmann updated FLINK-17750: -- Fix Version/s: 1.11.0 > YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint failed on > azure > --- > > Key: FLINK-17750 > URL: https://issues.apache.org/jira/browse/FLINK-17750 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.11.0 >Reporter: Roman Khachatryan >Priority: Critical > Labels: test-stability > Fix For: 1.11.0 > > > [https://dev.azure.com/khachatryanroman/810e80cc-0656-4d3c-9d8c-186764456a01/_apis/build/builds/6/logs/156] > > {code:java} > 2020-05-15T23:42:29.5307581Z [ERROR] > testKillYarnSessionClusterEntrypoint(org.apache.flink.yarn.YARNHighAvailabilityITCase) > Time elapsed: 21.68 s <<< ERROR! > 2020-05-15T23:42:29.5308406Z java.util.concurrent.ExecutionException: > 2020-05-15T23:42:29.5308864Z > org.apache.flink.runtime.rest.util.RestClientException: [Internal server > error., 2020-05-15T23:42:29.5309678Z java.util.concurrent.TimeoutException: > Invocation of public abstract java.util.concurrent.CompletableFuture > org.apache.flink.runtime.dispatcher.DispatcherGateway.requestJob(org.apache.flink.api.common.JobID,org.apache.flink.api.common.time.Time) > timed out. 
> 2020-05-15T23:42:29.5310322Z at com.sun.proxy.$Proxy33.requestJob(Unknown Source) > 2020-05-15T23:42:29.5311018Z at > org.apache.flink.runtime.rest.handler.legacy.DefaultExecutionGraphCache.getExecutionGraphInternal(DefaultExecutionGraphCache.java:103) > 2020-05-15T23:42:29.5311704Z at > org.apache.flink.runtime.rest.handler.legacy.DefaultExecutionGraphCache.getExecutionGraph(DefaultExecutionGraphCache.java:71) > 2020-05-15T23:42:29.5312355Z at > org.apache.flink.runtime.rest.handler.job.AbstractExecutionGraphHandler.handleRequest(AbstractExecutionGraphHandler.java:75) > 2020-05-15T23:42:29.5312924Z at > org.apache.flink.runtime.rest.handler.AbstractRestHandler.respondToRequest(AbstractRestHandler.java:73) > 2020-05-15T23:42:29.5313423Z at > org.apache.flink.runtime.rest.handler.AbstractHandler.respondAsLeader(AbstractHandler.java:172) > 2020-05-15T23:42:29.5314497Z at > org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.lambda$channelRead0$0(LeaderRetrievalHandler.java:81) > 2020-05-15T23:42:29.5315083Z at > java.util.Optional.ifPresent(Optional.java:159) > 2020-05-15T23:42:29.5315474Z at > org.apache.flink.util.OptionalConsumer.ifPresent(OptionalConsumer.java:46) > 2020-05-15T23:42:29.5315979Z at > org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.channelRead0(LeaderRetrievalHandler.java:78) > 2020-05-15T23:42:29.5316520Z at > org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.channelRead0(LeaderRetrievalHandler.java:49) > 2020-05-15T23:42:29.5317092Z at > org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > 2020-05-15T23:42:29.5317705Z at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) > 2020-05-15T23:42:29.5318586Z at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) > 
2020-05-15T23:42:29.5319249Z at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) > 2020-05-15T23:42:29.5319729Z at > org.apache.flink.runtime.rest.handler.router.RouterHandler.routed(RouterHandler.java:110) > 2020-05-15T23:42:29.5320136Z at > org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:89) > 2020-05-15T23:42:29.5320742Z at > org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:54) > 2020-05-15T23:42:29.5321195Z at > org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > 2020-05-15T23:42:29.5321730Z at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) > 2020-05-15T23:42:29.5322263Z at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) > 2020-05-15T23:42:29.5322806Z at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) > 2020-05-15T23:42:29.5323335Z at > org.apache.f
[GitHub] [flink] wuchong merged pull request #12265: [FLINK-16922][table-common] Fix DecimalData.toUnscaledBytes() should be consistent with BigDecimal.unscaledValue.toByteArray()
wuchong merged pull request #12265: URL: https://github.com/apache/flink/pull/12265
[GitHub] [flink] wuchong commented on pull request #11906: [FLINK-17356][jdbc][postgres] Support PK and Unique constraints
wuchong commented on pull request #11906: URL: https://github.com/apache/flink/pull/11906#issuecomment-631504885 Passed. Merging...
[jira] [Updated] (FLINK-17844) Activate japicmp-maven-plugin checks for @PublicEvolving between bug fix releases (x.y.u -> x.y.v)
[ https://issues.apache.org/jira/browse/FLINK-17844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Till Rohrmann updated FLINK-17844: -- Fix Version/s: 1.11.0 > Activate japicmp-maven-plugin checks for @PublicEvolving between bug fix > releases (x.y.u -> x.y.v) > -- > > Key: FLINK-17844 > URL: https://issues.apache.org/jira/browse/FLINK-17844 > Project: Flink > Issue Type: New Feature > Components: Build System >Reporter: Till Rohrmann >Priority: Critical > Fix For: 1.11.0 > > > According to > https://lists.apache.org/thread.html/rc58099fb0e31d0eac951a7bbf7f8bda8b7b65c9ed0c04622f5333745%40%3Cdev.flink.apache.org%3E, > the community has decided to establish stricter API and binary stability > guarantees. Concretely, the community voted to guarantee API and binary > stability for {{@PublicEvolving}} annotated classes between bug fix release > (x.y.u -> x.y.v). > Hence, I would suggest to activate this check by adding a new > {{japicmp-maven-plugin}} entry into Flink's {{pom.xml}} which checks for > {{@PublicEvolving}} classes between bug fix releases. We might have to update > the release guide to also include updating this configuration entry. -- This message was sent by Atlassian Jira (v8.3.4#803005)
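The new `japicmp-maven-plugin` entry the ticket proposes might look roughly like the following `pom.xml` sketch. This is a hypothetical illustration, not Flink's actual configuration: the chosen module, the baseline version, and the exact option names would need to be checked against the japicmp-maven-plugin documentation.

```xml
<!-- Hypothetical sketch: compare the current module against the previous
     bug fix release x.y.u, restricted to @PublicEvolving classes.
     Versions and the compared artifact are illustrative. -->
<plugin>
  <groupId>com.github.siom79.japicmp</groupId>
  <artifactId>japicmp-maven-plugin</artifactId>
  <configuration>
    <oldVersion>
      <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-core</artifactId>
        <version>1.10.0</version> <!-- previous bug fix release (x.y.u) -->
      </dependency>
    </oldVersion>
    <parameter>
      <includes>
        <!-- only check classes carrying the @PublicEvolving annotation -->
        <include>@org.apache.flink.annotation.PublicEvolving</include>
      </includes>
      <breakBuildOnBinaryIncompatibleModifications>true</breakBuildOnBinaryIncompatibleModifications>
    </parameter>
  </configuration>
  <executions>
    <execution>
      <phase>verify</phase>
      <goals><goal>cmp</goal></goals>
    </execution>
  </executions>
</plugin>
```

As the ticket notes, the `<version>` baseline would have to be bumped as part of each release, which is why the release guide would need updating as well.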
[GitHub] [flink] tillrohrmann commented on a change in pull request #12264: [FLINK-17558][netty] Release partitions asynchronously
tillrohrmann commented on a change in pull request #12264: URL: https://github.com/apache/flink/pull/12264#discussion_r428047035 ## File path: flink-runtime/src/test/java/org/apache/flink/runtime/taskexecutor/TaskExecutorPartitionLifecycleTest.java ## @@ -280,7 +273,65 @@ public void testClusterPartitionRelease() throws Exception { ); } - private void testPartitionRelease(PartitionTrackerSetup partitionTrackerSetup, TestAction testAction) throws Exception { + @Test + public void testBlockingLocalPartitionReleaseDoesNotBlockTaskExecutor() throws Exception { + BlockerSync sync = new BlockerSync(); + ResultPartitionManager blockingResultPartitionManager = new ResultPartitionManager() { + @Override + public void releasePartition(ResultPartitionID partitionId, Throwable cause) { + sync.blockNonInterruptible(); + super.releasePartition(partitionId, cause); + } + }; + + NettyShuffleEnvironment shuffleEnvironment = new NettyShuffleEnvironmentBuilder() + .setResultPartitionManager(blockingResultPartitionManager) + .setIoExecutor(java.util.concurrent.Executors.newFixedThreadPool(1)) Review comment: I would suggest to also shut this executor service down at the end of the test. It might be necessary to unblock the release operation for this. ## File path: flink-core/src/main/java/org/apache/flink/configuration/TaskManagerOptions.java ## @@ -490,6 +490,13 @@ + " size will be used. The exact size of JVM Overhead can be explicitly specified by setting the min/max" + " size to the same value."); + @Documentation.ExcludeFromDocumentation("This option just serves as a last-ditch escape hatch.") + public static final ConfigOption NUM_IO_THREADS = + key("taskmanager.io.threads.num") + .intType() + .defaultValue(2) + .withDescription("The number of threads to use for non-critical IO operations."); Review comment: We might be able to unify this configuration option with `ClusterOptions.CLUSTER_IO_EXECUTOR_POOL_SIZE`. 
## File path: flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/TaskManagerServices.java ## @@ -265,10 +265,15 @@ public static TaskManagerServices fromConfiguration( // start the I/O manager, it will create some temp directories. final IOManager ioManager = new IOManagerAsync(taskManagerServicesConfiguration.getTmpDirPaths()); + final ExecutorService ioExecutor = Executors.newFixedThreadPool( Review comment: Can the `ioExecutor` also replace the `taskIOExecutor`? ## File path: flink-runtime/src/test/java/org/apache/flink/runtime/io/network/NettyShuffleEnvironmentTest.java ## @@ -100,6 +105,27 @@ public void testRegisterTaskWithInsufficientBuffers() throws Exception { testRegisterTaskWithLimitedBuffers(bufferCount); } + @Test + public void testSlowIODoesNotBlockRelease() throws Exception { + BlockerSync sync = new BlockerSync(); Review comment: I guess a `OneShotLatch` would also work here if the test threads call the trigger on it.
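The teardown Till suggests — unblock the release operation first, then shut the ad-hoc executor down at the end of the test — could be sketched as below. This is a hypothetical stand-alone illustration, not the actual test code: a plain `CountDownLatch` stands in for Flink's `BlockerSync`/`OneShotLatch`, and the class name is made up.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class BlockingReleaseSketch {

    // Runs a "blocking partition release" on a dedicated IO executor,
    // unblocks it, then shuts the executor down; returns whether the
    // executor actually terminated.
    static boolean runAndShutdown() throws Exception {
        CountDownLatch releaseBlocker = new CountDownLatch(1); // stand-in for BlockerSync / OneShotLatch
        ExecutorService ioExecutor = Executors.newFixedThreadPool(1);
        try {
            Future<?> release = ioExecutor.submit(() -> {
                try {
                    releaseBlocker.await(); // simulates the blocking releasePartition() call
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            // ...the test body would assert here that the task executor
            // thread is not blocked while the release is still pending...
            releaseBlocker.countDown();        // unblock the release operation first,
            release.get(10, TimeUnit.SECONDS);
        } finally {
            ioExecutor.shutdown();             // ...so the executor can shut down cleanly
            ioExecutor.awaitTermination(10, TimeUnit.SECONDS);
        }
        return ioExecutor.isTerminated();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("executor terminated: " + runAndShutdown());
    }
}
```

Skipping the `countDown()` would leave the submitted task blocked forever and `awaitTermination` would time out — which is exactly why the review asks to unblock before shutting down.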
[GitHub] [flink] flinkbot edited a comment on pull request #12244: [FLINK-17258][network] Fix couple of ITCases that were failing with enabled unaligned checkpoints
flinkbot edited a comment on pull request #12244: URL: https://github.com/apache/flink/pull/12244#issuecomment-630723509 ## CI report: * 3dcc9233af810b8be408665c0083fab404a2dea5 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1833) * 7fa2068a283b9471384248c1bf301e3d406b5f48 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #12267: [FLINK-17842][network] Fix performance regression in SpanningWrapper#clear
flinkbot edited a comment on pull request #12267: URL: https://github.com/apache/flink/pull/12267#issuecomment-631489829 ## CI report: * 0afb379748084b4aef0fdf51c57e24044dfc31df Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1949)
[GitHub] [flink] flinkbot edited a comment on pull request #12264: [FLINK-17558][netty] Release partitions asynchronously
flinkbot edited a comment on pull request #12264: URL: https://github.com/apache/flink/pull/12264#issuecomment-631349883 ## CI report: * 19c5f57b94cc56b70002031618c32d9e6f68effb UNKNOWN * 9dbaf3094c0942b96a01060aba9d4ffbad9d1857 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1934) * bb313e40f5a72dbf20cd0a8b48267063fd4f00af UNKNOWN * eafbd98c812227cb7d9ce7158de1a23309855509 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #11906: [FLINK-17356][jdbc][postgres] Support PK and Unique constraints
flinkbot edited a comment on pull request #11906: URL: https://github.com/apache/flink/pull/11906#issuecomment-619214462 ## CI report: * 2e339ca93fcf4461ddb3502b49ab34083fc96cf6 UNKNOWN * 1310d3ed1bad9e2356a320128cac125e930831dc Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1932)
[GitHub] [flink] flinkbot edited a comment on pull request #12243: [FLINK-17805][network] Fix ArrayIndexOutOfBound for rotated input gate indexes
flinkbot edited a comment on pull request #12243: URL: https://github.com/apache/flink/pull/12243#issuecomment-630723410 ## CI report: * b956522108b0344ff004e859c0bc399dc8c38348 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1832) * a3be362324a56a5f9b118a09ea3552a3039acffe UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #12179: [FLINK-16144] get client.timeout for the client, with a fallback to the akka.client…
flinkbot edited a comment on pull request #12179: URL: https://github.com/apache/flink/pull/12179#issuecomment-629283467 ## CI report: * beb5343f2d9e91881e3c02cd0ef19230f22e21a9 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1496) * 439f5bb5f125322835886d5f9e12cb07a5625fcb UNKNOWN * 8725524bf20ae0a2a149be98090845b172c65cf6 UNKNOWN
[GitHub] [flink] rkhachatryan commented on a change in pull request #12244: [FLINK-17258][network] Fix couple of ITCases that were failing with enabled unaligned checkpoints
rkhachatryan commented on a change in pull request #12244: URL: https://github.com/apache/flink/pull/12244#discussion_r428043879 ## File path: flink-tests/src/test/java/org/apache/flink/test/classloading/ClassLoaderITCase.java ## @@ -300,7 +300,8 @@ public void testCheckpointingCustomKvStateJobWithCustomClassLoader() throws IOEx */ @Test public void testDisposeSavepointWithCustomKvState() throws Exception { - ClusterClient clusterClient = new MiniClusterClient(new Configuration(), miniClusterResource.getMiniCluster()); + Configuration configuration = new Configuration(); Review comment: nit: I guess it was extracted to disable unaligned checkpoints, but then a CLI argument was used; so this variable can be inlined back.
[GitHub] [flink] flinkbot edited a comment on pull request #12181: [FLINK-17645][runtime] Reset SafetyNetCloseableRegistry#REAPER_THREAD if it fails to start
flinkbot edited a comment on pull request #12181: URL: https://github.com/apache/flink/pull/12181#issuecomment-629344595 ## CI report: * fbefe16eb3f7769b6daf6cfe1fa26b7a0f7130a8 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1930)
[jira] [Commented] (FLINK-15534) YARNSessionCapacitySchedulerITCase#perJobYarnClusterWithParallelism failed due to NPE
[ https://issues.apache.org/jira/browse/FLINK-15534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112250#comment-17112250 ] Robert Metzger commented on FLINK-15534: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1929&view=logs&j=fc5181b0-e452-5c8f-68de-1097947f6483&t=6b04ca5f-0b52-511d-19c9-52bf0d9fbdfa > YARNSessionCapacitySchedulerITCase#perJobYarnClusterWithParallelism failed > due to NPE > - > > Key: FLINK-15534 > URL: https://issues.apache.org/jira/browse/FLINK-15534 > Project: Flink > Issue Type: Bug > Components: Deployment / YARN >Affects Versions: 1.11.0 >Reporter: Yu Li >Assignee: Yang Wang >Priority: Blocker > > As titled, travis run fails with below error: > {code} > 07:29:22.417 [ERROR] > perJobYarnClusterWithParallelism(org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase) > Time elapsed: 16.263 s <<< ERROR! > java.lang.NullPointerException: > java.lang.NullPointerException > at > org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.getAggregateAppResourceUsage(RMAppAttemptMetrics.java:128) > at > org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.getApplicationResourceUsageReport(RMAppAttemptImpl.java:900) > at > org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.createAndGetApplicationReport(RMAppImpl.java:660) > at > org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplications(ClientRMService.java:930) > at > org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplications(ApplicationClientProtocolPBServiceImpl.java:273) > at > org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:507) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) > at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:847) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:790) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2486) > at > org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase.perJobYarnClusterWithParallelism(YARNSessionCapacitySchedulerITCase.java:405) > Caused by: org.apache.hadoop.ipc.RemoteException: > java.lang.NullPointerException > at > org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.getAggregateAppResourceUsage(RMAppAttemptMetrics.java:128) > at > org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.getApplicationResourceUsageReport(RMAppAttemptImpl.java:900) > at > org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.createAndGetApplicationReport(RMAppImpl.java:660) > at > org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplications(ClientRMService.java:930) > at > org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplications(ApplicationClientProtocolPBServiceImpl.java:273) > at > org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:507) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:847) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:790) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836) > at 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2486) > at > org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase.perJobYarnClusterWithParallelism(YARNSessionCapacitySchedulerITCase.java:405) > {code} > https://api.travis-ci.org/v3/job/634588108/log.txt
[GitHub] [flink] flinkbot commented on pull request #12268: [FLINK-17375] Refactor travis_watchdog.sh into separate ci and azure scripts.
flinkbot commented on pull request #12268: URL: https://github.com/apache/flink/pull/12268#issuecomment-631497626 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 4ed6888375869e654816264124703e72439c6148 (Wed May 20 14:09:51 UTC 2020) **Warnings:** * Documentation files were touched, but no `.zh.md` files: Update Chinese documentation or file Jira ticket. Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[jira] [Updated] (FLINK-17375) Clean up CI system related scripts
[ https://issues.apache.org/jira/browse/FLINK-17375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-17375: --- Labels: pull-request-available (was: ) > Clean up CI system related scripts > -- > > Key: FLINK-17375 > URL: https://issues.apache.org/jira/browse/FLINK-17375 > Project: Flink > Issue Type: Sub-task > Components: Build System, Build System / Azure Pipelines >Reporter: Robert Metzger >Assignee: Robert Metzger >Priority: Major > Labels: pull-request-available > > Once we have only one CI system in place for Flink (again), it makes sense to > clean up the available scripts: > - Separate "Azure-specific" from "CI-generic" files (names of files, methods, > build profiles) > - separate "log handling" from "build timeout" in "travis_watchdog" > - remove workarounds needed because of Travis limitations
[GitHub] [flink] rmetzger opened a new pull request #12268: [FLINK-17375] Refactor travis_watchdog.sh into separate ci and azure scripts.
rmetzger opened a new pull request #12268: URL: https://github.com/apache/flink/pull/12268 ## What is the purpose of the change Clean up the CI-related scripts in `tools/`. ## Brief change log For reviewing this change, I recommend starting from the `job-template.yml` file to see how the scripts are connected. - travis_watchdog.sh used to be a combination of things: test stage control (including the python test invocation), debug artifact management (mostly uploading artifacts), and test timeout control. The biggest issue was how the python tests were integrated into that file. I moved the "watchdog" functionality into a separate file and created a new `test_controller.sh`. - azure_controller.sh used to be the entry point for the CI system, controlling the compile stage. I moved most of it into `tools/ci/compile.sh`. ## Verifying this change I have tested timing out builds (both for regular maven/surefire/java timeouts and python) to make sure the refactored watchdog works and exit codes are properly forwarded. Once the PR has reached an acceptable state, I will also test the nightly builds on my personal Azure account to make sure the python wheels definition works. The separation of changes into separate commits is not optimal (some YARN changes are a bit unrelated in the refactoring commit).
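The watchdog behavior this PR refactors — bounding a build step's runtime while still forwarding its exit code to the caller — can be sketched in a few lines of bash. The function name and timeout values here are illustrative, not taken from Flink's actual scripts:

```shell
#!/usr/bin/env bash
# Minimal watchdog sketch: run a command, kill it if it exceeds a timeout,
# and propagate the command's exit code (or 137 = 128 + SIGKILL on timeout).
run_with_watchdog() {
  local max_seconds=$1; shift
  "$@" &                                     # launch the watched command
  local cmd_pid=$!
  ( sleep "$max_seconds"; kill -9 "$cmd_pid" 2>/dev/null ) &
  local watchdog_pid=$!
  wait "$cmd_pid"                            # blocks until the command exits or is killed
  local exit_code=$?
  kill "$watchdog_pid" 2>/dev/null           # cancel the watchdog if the command finished in time
  return "$exit_code"
}

run_with_watchdog 60 true;    echo "fast command exited with $?"
run_with_watchdog 1 sleep 30; echo "slow command exited with $?"
```

`wait` returns 128 plus the signal number for a killed job, which is what makes the timeout distinguishable from a normal failure in CI logs.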
[GitHub] [flink] flinkbot edited a comment on pull request #12265: [FLINK-16922][table-common] Fix DecimalData.toUnscaledBytes() should be consistent with BigDecimal.unscaledValue.toByteArray()
flinkbot edited a comment on pull request #12265: URL: https://github.com/apache/flink/pull/12265#issuecomment-631389712 ## CI report: * 4f4662a0211a334a8033d317b57cd8755677c744 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1935)
[GitHub] [flink] flinkbot edited a comment on pull request #12264: [FLINK-17558][netty] Release partitions asynchronously
flinkbot edited a comment on pull request #12264: URL: https://github.com/apache/flink/pull/12264#issuecomment-631349883 ## CI report: * 19c5f57b94cc56b70002031618c32d9e6f68effb UNKNOWN * 9dbaf3094c0942b96a01060aba9d4ffbad9d1857 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1934) * bb313e40f5a72dbf20cd0a8b48267063fd4f00af UNKNOWN
[GitHub] [flink] flinkbot commented on pull request #12267: [FLINK-17842][network] Fix performance regression in SpanningWrapper#clear
flinkbot commented on pull request #12267: URL: https://github.com/apache/flink/pull/12267#issuecomment-631489829 ## CI report: * 0afb379748084b4aef0fdf51c57e24044dfc31df UNKNOWN
[jira] [Created] (FLINK-17844) Activate japicmp-maven-plugin checks for @PublicEvolving between bug fix releases (x.y.u -> x.y.v)
Till Rohrmann created FLINK-17844: - Summary: Activate japicmp-maven-plugin checks for @PublicEvolving between bug fix releases (x.y.u -> x.y.v) Key: FLINK-17844 URL: https://issues.apache.org/jira/browse/FLINK-17844 Project: Flink Issue Type: New Feature Components: Build System Reporter: Till Rohrmann According to https://lists.apache.org/thread.html/rc58099fb0e31d0eac951a7bbf7f8bda8b7b65c9ed0c04622f5333745%40%3Cdev.flink.apache.org%3E, the community has decided to establish stricter API and binary stability guarantees. Concretely, the community voted to guarantee API and binary stability for {{@PublicEvolving}} annotated classes between bug fix releases (x.y.u -> x.y.v). Hence, I would suggest to activate this check by adding a new {{japicmp-maven-plugin}} entry into Flink's {{pom.xml}} which checks for {{@PublicEvolving}} classes between bug fix releases. We might have to update the release guide to also include updating this configuration entry.
[GitHub] [flink] flinkbot edited a comment on pull request #12263: [FLINK-16998][core] Support backwards compatibility for upgraded RowSerializer
flinkbot edited a comment on pull request #12263: URL: https://github.com/apache/flink/pull/12263#issuecomment-631274882 ## CI report: * 0e1d9cde275d0717fb9b32f6d1a3aed600c33166 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1933) * 320f0a551c635e98c4aff4af6d853d3cf2681fee Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1944)
[GitHub] [flink] flinkbot edited a comment on pull request #12179: [FLINK-16144] get client.timeout for the client, with a fallback to the akka.client…
flinkbot edited a comment on pull request #12179: URL: https://github.com/apache/flink/pull/12179#issuecomment-629283467 ## CI report: * beb5343f2d9e91881e3c02cd0ef19230f22e21a9 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1496) * 439f5bb5f125322835886d5f9e12cb07a5625fcb UNKNOWN
[GitHub] [flink] pnowojski commented on a change in pull request #12243: [FLINK-17805][network] Fix ArrayIndexOutOfBound for rotated input gate indexes
pnowojski commented on a change in pull request #12243: URL: https://github.com/apache/flink/pull/12243#discussion_r428017856 ## File path: flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/io/InputProcessorUtilTest.java ## @@ -58,4 +79,57 @@ public void testGenerateInputGateToChannelIndexOffsetMap() { assertEquals(0, inputGateToChannelIndexOffsetMap.get(ig1).intValue()); assertEquals(3, inputGateToChannelIndexOffsetMap.get(ig2).intValue()); } + + @Test + public void testCreateCheckpointedMultipleInputGate() throws Exception { + try (CloseableRegistry registry = new CloseableRegistry()) { + MockEnvironment environment = new MockEnvironmentBuilder().build(); + MockStreamTask streamTask = new MockStreamTaskBuilder(environment).build(); + StreamConfig streamConfig = new StreamConfig(environment.getJobConfiguration()); + streamConfig.setCheckpointMode(CheckpointingMode.EXACTLY_ONCE); + streamConfig.setUnalignedCheckpointsEnabled(true); + + // First input gate has index larger than the second + Collection[] inputGates = new Collection[] { + Collections.singletonList(new MockIndexedInputGate(1, 4)), + Collections.singletonList(new MockIndexedInputGate(0, 2)), + }; + + new MockChannelStateWriter() { Review comment: ops, that's a left over of some previous version. 
## File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/io/InputProcessorUtil.java ## @@ -79,11 +80,26 @@ public static CheckpointedInputGate createCheckpointedInputGate( unionedInputGates[i] = InputGateUtil.createInputGate(inputGates[i].toArray(new IndexedInputGate[0])); } + IntStream numberOfInputChannelsPerGate = + Arrays + .stream(inputGates) + .flatMap(collection -> collection.stream()) + .sorted(Comparator.comparingInt(IndexedInputGate::getGateIndex)) + .mapToInt(InputGate::getNumberOfInputChannels); + Map inputGateToChannelIndexOffset = generateInputGateToChannelIndexOffsetMap(unionedInputGates); + // Note that numberOfInputChannelsPerGate and inputGateToChannelIndexOffset have a bit different Review comment: Hmmm, I'm not sure, as what if the left input has input gates with indexes `0` and `3`, while the right input has indexes `1`, `2` and `4`? (I'm not sure if that's a valid scenario in the JobGraphGenerator.) The left input would have one instance of `UnionInputGate` over gates 0 and 3, while the right input would have another instance with gates 1, 2 and 4. However we sort them, it would be somehow inconsistent?
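The offset computation under discussion — order gates by their global gate index and give each gate the running sum of channel counts before it — can be illustrated with a small stand-alone sketch. `Gate` here is a hypothetical stand-in for `IndexedInputGate`, and the channel counts are made up to mirror the gate-index layout in the review question (left input owns gates 0 and 3, right owns 1, 2 and 4):

```java
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GateOffsetSketch {
    // Hypothetical stand-in for IndexedInputGate: (gateIndex, numChannels).
    record Gate(int gateIndex, int numChannels) {}

    // Sort gates by their global gate index, then assign each gate the
    // running sum of channel counts of all gates ordered before it.
    static Map<Integer, Integer> channelIndexOffsets(List<Gate> gates) {
        Map<Integer, Integer> offsets = new LinkedHashMap<>();
        int offset = 0;
        for (Gate g : gates.stream()
                .sorted(Comparator.comparingInt(Gate::gateIndex))
                .collect(Collectors.toList())) {
            offsets.put(g.gateIndex(), offset);
            offset += g.numChannels();
        }
        return offsets;
    }

    public static void main(String[] args) {
        List<Gate> gates = List.of(
                new Gate(0, 2), new Gate(3, 4),                  // left input's gates
                new Gate(1, 1), new Gate(2, 1), new Gate(4, 2)); // right input's gates
        System.out.println(channelIndexOffsets(gates));
        // prints {0=0, 1=2, 2=3, 3=4, 4=8}
    }
}
```

The sketch shows the inconsistency Piotr points at: after global sorting, the left input's gate 3 starts at offset 4, in the middle of the right input's gates, so a per-union-gate ordering and the global gate-index ordering disagree.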
[jira] [Closed] (FLINK-17780) Add task name to log statements of ChannelStateWriter
[ https://issues.apache.org/jira/browse/FLINK-17780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Piotr Nowojski closed FLINK-17780. -- Resolution: Fixed merged commit e7f7c5e into apache:master and as 90ece8c119 to release-1.11 > Add task name to log statements of ChannelStateWriter > - > > Key: FLINK-17780 > URL: https://issues.apache.org/jira/browse/FLINK-17780 > Project: Flink > Issue Type: Improvement > Components: Runtime / Checkpointing >Reporter: Arvid Heise >Assignee: Arvid Heise >Priority: Major > Labels: pull-request-available > Fix For: 1.11.0 > > > Currently debugging unaligned checkpoint through logs is difficult as many > relevant log statements cannot be connected to the respective task. > > Add task name to the executor thread and to all method of ChannelStateWriter > (as they can be called from any other thread).
[GitHub] [flink] flinkbot edited a comment on pull request #12263: [FLINK-16998][core] Support backwards compatibility for upgraded RowSerializer
flinkbot edited a comment on pull request #12263: URL: https://github.com/apache/flink/pull/12263#issuecomment-631274882 ## CI report: * 0e1d9cde275d0717fb9b32f6d1a3aed600c33166 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1933) * 320f0a551c635e98c4aff4af6d853d3cf2681fee UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #12266: [FLINK-17843][table-api] Check the RowKind when converting a Row from object to an expression
flinkbot edited a comment on pull request #12266: URL: https://github.com/apache/flink/pull/12266#issuecomment-631427812 ## CI report: * f33808f833da63c5563b48688053d49dedc46538 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1938) * d3268f7bfdf1dfa2c19dc2b38c80a4ee84a5f26c Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1943)
[GitHub] [flink] flinkbot edited a comment on pull request #11900: [FLINK-17284][jdbc][postgres] Support serial fields
flinkbot edited a comment on pull request #11900: URL: https://github.com/apache/flink/pull/11900#issuecomment-618914824 ## CI report: * 69bce2717b0279a894aa66d15cd4b9b72cd5a474 UNKNOWN * 17ee20d6efb84cca02a24b032c9504dcf03ff8a1 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1872) * 4cf97b2be4447c2d2f94259ad559fefb79a0a727 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1942)
[GitHub] [flink] pnowojski merged pull request #12205: [FLINK-17780][checkpointing] Add task name to log statements of ChannelStateWriter.
pnowojski merged pull request #12205: URL: https://github.com/apache/flink/pull/12205
[GitHub] [flink] flinkbot commented on pull request #12267: [FLINK-17842][network] Fix performance regression in SpanningWrapper#clear
flinkbot commented on pull request #12267: URL: https://github.com/apache/flink/pull/12267#issuecomment-631471721 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 0afb379748084b4aef0fdf51c57e24044dfc31df (Wed May 20 13:24:59 UTC 2020) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] pnowojski opened a new pull request #12267: [FLINK-17842][network] Fix performance regression in SpanningWrapper#clear
pnowojski opened a new pull request #12267: URL: https://github.com/apache/flink/pull/12267 For some reason the following commit: 54155744bd [FLINK-17547][task] Use RefCountedFile in SpanningWrapper caused a performance regression in various benchmarks. It's hard to tell why, as none of the benchmarks use spill files (records are too small), so our best guess is that the combination of the AtomicInteger inside RefCountedFile plus the NullPointerException handling interfered with the JIT's ability to get rid of the memory barrier (from the AtomicInteger) on the hot path. ## Verifying this change This change is covered by existing micro benchmarks. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes / **no**) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**) - The serializers: (yes / **no** / don't know) - The runtime per-record code paths (performance sensitive): (**yes** / no / don't know) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / don't know) - The S3 file system connector: (yes / **no** / don't know) ## Documentation - Does this pull request introduce a new feature? (yes / **no**) - If yes, how is the feature documented? (**not applicable** / docs / JavaDocs / not documented)
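The two ref-counting styles the PR description contrasts can be sketched as follows. This is an illustration of the guessed cause, not the actual Flink `RefCountedFile` code: the `AtomicInteger` variant pays for a CAS/memory barrier on every retain/release, even when the guarded resource (a spill file) is never actually used, while a plain field needs no barrier and is trivially JIT-friendly when access is single-threaded:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RefCounting {
    /** Thread-safe variant: every call pays for the atomic increment's barrier. */
    public static final class AtomicRefCounted {
        private final AtomicInteger refs = new AtomicInteger(1);
        public int retain()  { return refs.incrementAndGet(); }
        public int release() { return refs.decrementAndGet(); }
    }

    /** Single-threaded variant: a plain field, no barrier on the hot path. */
    public static final class PlainRefCounted {
        private int refs = 1;
        public int retain()  { return ++refs; }
        public int release() { return --refs; }
    }

    public static void main(String[] args) {
        AtomicRefCounted a = new AtomicRefCounted();
        a.retain();
        System.out.println(a.release()); // 1 -> still referenced
        PlainRefCounted p = new PlainRefCounted();
        System.out.println(p.release()); // 0 -> last reference gone, safe to delete the file
    }
}
```

The semantics are identical; the difference the benchmarks would see is purely the per-call synchronization cost on a per-record code path.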
[jira] [Updated] (FLINK-17842) Performance regression on 19.05.2020
[ https://issues.apache.org/jira/browse/FLINK-17842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-17842: --- Labels: pull-request-available (was: ) > Performance regression on 19.05.2020 > > > Key: FLINK-17842 > URL: https://issues.apache.org/jira/browse/FLINK-17842 > Project: Flink > Issue Type: Bug > Components: Benchmarks >Affects Versions: 1.11.0 >Reporter: Piotr Nowojski >Assignee: Piotr Nowojski >Priority: Blocker > Labels: pull-request-available > Fix For: 1.11.0 > > > There is a noticeable performance regression in many benchmarks: > http://codespeed.dak8s.net:8000/timeline/?ben=serializerHeavyString&env=2 > http://codespeed.dak8s.net:8000/timeline/?ben=networkThroughput.1000,1ms&env=2 > http://codespeed.dak8s.net:8000/timeline/?ben=networkThroughput.100,100ms&env=2 > http://codespeed.dak8s.net:8000/timeline/?ben=globalWindow&env=2 > that happened on May 19th, probably between 260ef2c and 2f18138
[GitHub] [flink] flinkbot edited a comment on pull request #12266: [FLINK-17843][table-api] Check the RowKind when converting a Row from object to an expression
flinkbot edited a comment on pull request #12266: URL: https://github.com/apache/flink/pull/12266#issuecomment-631427812 ## CI report: * f33808f833da63c5563b48688053d49dedc46538 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1938) * d3268f7bfdf1dfa2c19dc2b38c80a4ee84a5f26c UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #12260: [FLINK-17189][table-planner] Table with proctime attribute cannot be read from Hive catalog
flinkbot edited a comment on pull request #12260: URL: https://github.com/apache/flink/pull/12260#issuecomment-631229314 ## CI report: * 87d0b478bf38fc74639f8ac2c065e4e6d2fc2156 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1927)
[GitHub] [flink] flinkbot edited a comment on pull request #12254: [FLINK-17802][kafka] Set offset commit only if group id is configured for new Kafka Table source
flinkbot edited a comment on pull request #12254: URL: https://github.com/apache/flink/pull/12254#issuecomment-630911224 ## CI report: * 6dd81680fa2182b19b2770f7338c3810aa1e4106 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1922)
[GitHub] [flink] flinkbot edited a comment on pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…
flinkbot edited a comment on pull request #12230: URL: https://github.com/apache/flink/pull/12230#issuecomment-630205457 ## CI report: * 458ca449de6bb1007cd3e83f81fe09f973e7f6d3 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1926)
[GitHub] [flink] flinkbot edited a comment on pull request #11900: [FLINK-17284][jdbc][postgres] Support serial fields
flinkbot edited a comment on pull request #11900: URL: https://github.com/apache/flink/pull/11900#issuecomment-618914824 ## CI report: * 69bce2717b0279a894aa66d15cd4b9b72cd5a474 UNKNOWN * 17ee20d6efb84cca02a24b032c9504dcf03ff8a1 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1872) * 4cf97b2be4447c2d2f94259ad559fefb79a0a727 UNKNOWN
[jira] [Commented] (FLINK-17775) Cannot set batch job name when using collect
[ https://issues.apache.org/jira/browse/FLINK-17775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112176#comment-17112176 ] Nikola commented on FLINK-17775: Hi [~aljoscha], that seems to be my bad. I can indeed remove the last {{env.execute()}} and my job will work just fine. I am using flink in docker, which we start through {{bin/taskmanager.sh}} and {{bin/jobmanager.sh}}. However, regarding the issue, it seems there is no way around it at the moment, as the code you point to does not take a job name into consideration. On the other hand, when I am using both .collect() and env.execute() you said my job will run twice. However, I cannot see my job running twice (or 2 jobs running); I can see only one. > Cannot set batch job name when using collect > > > Key: FLINK-17775 > URL: https://issues.apache.org/jira/browse/FLINK-17775 > Project: Flink > Issue Type: Bug > Components: Runtime / Configuration >Affects Versions: 1.8.3, 1.9.3, 1.10.1 >Reporter: Nikola >Priority: Critical > > We have a batch job in the likes of this: > > {code:java} > ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment(); > DataSet dataSet = getDataSet(); > dataSet > .sortPartition(MyRow::getCount, Order.DESCENDING) > .setParallelism(1) > .flatMap(new MyFlatMap()) > .collect(); > env.execute("Job at " + Instant.now().toString()); > {code} > However, the job name in the flink UI is not "Job at " but the default > as if I didn't put anything. > > Is there a way to have my own flink job name? >
[GitHub] [flink] twalthr commented on pull request #12263: [FLINK-16998][core] Support backwards compatibility for upgraded RowSerializer
twalthr commented on pull request #12263: URL: https://github.com/apache/flink/pull/12263#issuecomment-631460366 Thanks @tzulitai. I addressed the remaining comments and will merge this once the build gives green light.
[GitHub] [flink] fpompermaier commented on a change in pull request #11906: [FLINK-17356][jdbc][postgres] Support PK and Unique constraints
fpompermaier commented on a change in pull request #11906: URL: https://github.com/apache/flink/pull/11906#discussion_r427972573 ## File path: flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/catalog/AbstractJdbcCatalog.java ## @@ -126,31 +124,33 @@ public String getBaseUrl() { // -- retrieve PK constraint -- - protected UniqueConstraint getPrimaryKey(DatabaseMetaData metaData, String schema, String table) throws SQLException { + protected Optional getPrimaryKey(DatabaseMetaData metaData, String schema, String table) throws SQLException { // According to the Javadoc of java.sql.DatabaseMetaData#getPrimaryKeys, // the returned primary key columns are ordered by COLUMN_NAME, not by KEY_SEQ. // We need to sort them based on the KEY_SEQ value. ResultSet rs = metaData.getPrimaryKeys(null, schema, table); - List> columnsWithIndex = null; + Map keySeqColumnName = new HashMap<>(); String pkName = null; - while (rs.next()) { + while (rs.next()) { String columnName = rs.getString("COLUMN_NAME"); - pkName = rs.getString("PK_NAME"); + pkName = rs.getString("PK_NAME"); // all the PK_NAME should be the same int keySeq = rs.getInt("KEY_SEQ"); - if (columnsWithIndex == null) { - columnsWithIndex = new ArrayList<>(); - } - columnsWithIndex.add(new AbstractMap.SimpleEntry<>(Integer.valueOf(keySeq), columnName)); + keySeqColumnName.put(keySeq - 1, columnName); // KEY_SEQ is 1-based index } - if (columnsWithIndex != null) { - // sort columns by KEY_SEQ - columnsWithIndex.sort(Comparator.comparingInt(Map.Entry::getKey)); - List cols = columnsWithIndex.stream().map(Map.Entry::getValue).collect(Collectors.toList()); - return UniqueConstraint.primaryKey(pkName, cols); + List pkFields = Arrays.asList(new String[keySeqColumnName.size()]); // initialize size + keySeqColumnName.forEach(pkFields::set); Review comment: very neat and brilliant pattern... I learned something new :+1:
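The pattern praised in the review above can be shown in isolation: primary-key columns arrive keyed by a 1-based `KEY_SEQ`, and a fixed-size backed list is filled in place via `forEach(list::set)`, so the result is ordered by key sequence without an explicit sort. A minimal, self-contained sketch (the column names here are made up for illustration):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PkColumns {
    /** Orders column names by their (already 0-based) key-sequence offsets. */
    public static List<String> orderBySeq(Map<Integer, String> keySeqToColumn) {
        // Arrays.asList over a pre-sized array yields a fixed-size list that
        // supports set(index, element) -- exactly what the diff relies on.
        List<String> pkFields = Arrays.asList(new String[keySeqToColumn.size()]);
        // Map.Entry keys act as list indexes; List::set matches the BiConsumer shape.
        keySeqToColumn.forEach(pkFields::set);
        return pkFields;
    }

    public static void main(String[] args) {
        Map<Integer, String> bySeq = new HashMap<>();
        bySeq.put(1, "name"); // KEY_SEQ 2 -> index 1
        bySeq.put(0, "id");   // KEY_SEQ 1 -> index 0
        System.out.println(orderBySeq(bySeq)); // [id, name]
    }
}
```

The trick works because `List#set(int, E)` has exactly the `BiConsumer<Integer, String>` shape that `Map#forEach` expects (with unboxing), so no intermediate sorted collection is needed.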
[jira] [Closed] (FLINK-15947) Finish moving scala expression DSL to flink-table-api-scala
[ https://issues.apache.org/jira/browse/FLINK-15947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Wysakowicz closed FLINK-15947. Fix Version/s: (was: 1.12.0) 1.11.0 Release Note: Due to various issues with packages {{org.apache.flink.table.api.scala/java}} all classes from those packages were relocated. Moreover, the scala expressions were moved to org.apache.flink.table.api as announced in Flink 1.9. If you used one of * {{org.apache.flink.table.api.java.StreamTableEnvironment}} * {{org.apache.flink.table.api.scala.StreamTableEnvironment}} * {{org.apache.flink.table.api.java.BatchTableEnvironment}} * {{org.apache.flink.table.api.scala.BatchTableEnvironment}} and you do not convert to/from DataStream, switch to: * {{org.apache.flink.table.api.TableEnvironment}} If you do convert to/from DataStream/DataSet, change your imports to one of: * {{org.apache.flink.table.api.bridge.java.StreamTableEnvironment}} * {{org.apache.flink.table.api.bridge.scala.StreamTableEnvironment}} * {{org.apache.flink.table.api.bridge.java.BatchTableEnvironment}} * {{org.apache.flink.table.api.bridge.scala.BatchTableEnvironment}} For the Scala expressions use the import {{org.apache.flink.table.api._}} instead of {{org.apache.flink.table.api.bridge.scala._}} Additionally, if you use Scala's implicit conversions to/from DataStream/DataSet, import {{org.apache.flink.table.api.bridge.scala._}} instead of {{org.apache.flink.table.api.scala._}} Resolution: Fixed Implemented in master: 5f0183fe79d10ac36101f60f2589062a39630f96 4e56ca11fb275c72f4a70f8dd12ff71dc12983d3 1.11: 194b85b42749b03c5f1e79b5ae4377ab7230df36 87a0358deb51cf55f455d0dd4cfd6bf8690b2e2e > Finish moving scala expression DSL to flink-table-api-scala > --- > > Key: FLINK-15947 > URL: https://issues.apache.org/jira/browse/FLINK-15947 > Project: Flink > Issue Type: Improvement > Components: Table SQL / API >Reporter: Dawid Wysakowicz >Assignee: Dawid Wysakowicz >Priority: Major > Labels: pull-request-available > 
Fix For: 1.11.0 > > > FLINK-13045 performed the first step of moving implicit conversions to a long > term package object. It also added release notes so that users have time to > adapt to the changes. > Now that it's two releases since that time, we can finish moving all the > intended conversions.
[GitHub] [flink] tzulitai commented on a change in pull request #12263: [FLINK-16998][core] Support backwards compatibility for upgraded RowSerializer
tzulitai commented on a change in pull request #12263: URL: https://github.com/apache/flink/pull/12263#discussion_r427970574 ## File path: flink-core/src/main/java/org/apache/flink/api/java/typeutils/runtime/RowSerializer.java ## @@ -367,21 +367,21 @@ public int getVersion() { /** * A {@link TypeSerializerSnapshot} for RowSerializer. */ - // TODO not fully functional yet due to FLINK-17520 public static final class RowSerializerSnapshot extends CompositeTypeSerializerSnapshot { private static final int VERSION = 3; - private static final int VERSION_WITHOUT_ROW_KIND = 2; + private static final int LAST_VERSION_WITHOUT_ROW_KIND = 2; - private boolean legacyModeEnabled = false; + private int readVersion = VERSION; public RowSerializerSnapshot() { super(RowSerializer.class); } RowSerializerSnapshot(RowSerializer serializerInstance) { super(serializerInstance); + this.readVersion = serializerInstance.legacyModeEnabled ? LAST_VERSION_WITHOUT_ROW_KIND : VERSION; Review comment: I don't think this line is needed, unless I'm missing something in the tests. The read version should only ever be changed if this snapshot was created by restoring from a snapshot. In this case, this constructor is only ever used to create a new snapshot when checkpointing occurs - the read version should be the default value (`VERSION`). ## File path: flink-core/src/main/java/org/apache/flink/api/java/typeutils/runtime/RowSerializer.java ## @@ -60,7 +60,7 @@ public static final int ROW_KIND_OFFSET = 2; - private static final long serialVersionUID = 2L; + private static final long serialVersionUID = 1L; // legacy, don't touch Review comment: nit: maybe add a comment that this can only be touched after support for 1.9 savepoints is ditched.
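The invariant the reviewer describes generalizes beyond Flink: a snapshot created fresh always carries the current format version, and the read version only diverges when restoring older persisted data. A generic sketch (not Flink's `TypeSerializerSnapshot` API; class and method names here are invented for illustration):

```java
public class VersionedSnapshot {
    public static final int VERSION = 3;
    public static final int LAST_VERSION_WITHOUT_ROW_KIND = 2;

    // A freshly created snapshot is always at the current version; the
    // constructor must not derive this from serializer state.
    private int readVersion = VERSION;

    /** Called only on restore; this is the one place readVersion may change. */
    public void readSnapshot(int storedVersion) {
        this.readVersion = storedVersion;
    }

    /** Format feature gate: row kind only exists from version 3 onwards. */
    public boolean supportsRowKind() {
        return readVersion > LAST_VERSION_WITHOUT_ROW_KIND;
    }

    public static void main(String[] args) {
        VersionedSnapshot fresh = new VersionedSnapshot();
        System.out.println(fresh.supportsRowKind()); // true: new snapshots use the current format

        VersionedSnapshot restored = new VersionedSnapshot();
        restored.readSnapshot(LAST_VERSION_WITHOUT_ROW_KIND);
        System.out.println(restored.supportsRowKind()); // false: legacy data lacks the row kind
    }
}
```

Keeping the restore path as the single mutation point is what makes the reviewer's argument work: the checkpointing constructor can then never accidentally write a legacy-format snapshot.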
[jira] [Commented] (FLINK-17828) AggregateReduceGroupingITCase fails on azure
[ https://issues.apache.org/jira/browse/FLINK-17828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112135#comment-17112135 ] Jark Wu commented on FLINK-17828: - Hi [~chesnay], there are 5 lines... I guess the mismatch is hidden in the 5 lines. > AggregateReduceGroupingITCase fails on azure > > > Key: FLINK-17828 > URL: https://issues.apache.org/jira/browse/FLINK-17828 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.12.0 >Reporter: Dawid Wysakowicz >Priority: Blocker > Labels: test-stability > > failure: > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1906&view=logs&j=e25d5e7e-2a9c-5589-4940-0b638d75a414&t=f83cd372-208c-5ec4-12a8-337462457129 > {code} > 2020-05-20T05:45:19.9368056Z [ERROR] Tests run: 16, Failures: 1, Errors: 0, > Skipped: 0, Time elapsed: 70.635 s <<< FAILURE! - in > org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase > 2020-05-20T05:45:19.9400043Z [ERROR] > testSingleAggOnTable_SortAgg(org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase) > Time elapsed: 17.264 s <<< FAILURE! 
> 2020-05-20T05:45:19.9401582Z java.lang.AssertionError: > 2020-05-20T05:45:19.9402509Z > 2020-05-20T05:45:19.9402933Z Results do not match for query: > 2020-05-20T05:45:19.9403278Z SELECT a6, b6, max(c6), count(d6), sum(e6) > FROM T6 GROUP BY a6, b6 > 2020-05-20T05:45:19.9407322Z > 2020-05-20T05:45:19.9407851Z Results > 2020-05-20T05:45:19.9408713Z == Correct Result - 5 == == Actual Result > - 5 == > 2020-05-20T05:45:19.9409059Z 0,1,null,1,10 0,1,null,1,10 > 2020-05-20T05:45:19.9409717Z 1,1,Hello1,1,101,1,Hello1,1,10 > 2020-05-20T05:45:19.9410018Z 10,1,Hello10,1,10 10,1,Hello10,1,10 > 2020-05-20T05:45:19.9410296Z 100,1,Hello100,1,10 > 100,1,Hello100,1,10 > 2020-05-20T05:45:19.9410596Z 1000,1,null,1,10 1000,1,null,1,10 > 2020-05-20T05:45:19.9410868Z 1,1,null,1,10 1,1,null,1,10 > 2020-05-20T05:45:19.9411184Z 10001,1,Hello10001,1,10 > 10001,1,Hello10001,1,10 > 2020-05-20T05:45:19.9411479Z 10002,1,Hello10002,1,10 > 10002,1,Hello10002,1,10 > 2020-05-20T05:45:19.9411786Z 10003,1,Hello10003,1,10 > 10003,1,Hello10003,1,10 > 2020-05-20T05:45:19.9412092Z 10004,1,Hello10004,1,10 > 10004,1,Hello10004,1,10 > 2020-05-20T05:45:19.9412379Z 10005,1,Hello10005,1,10 > 10005,1,Hello10005,1,10 > 2020-05-20T05:45:19.9412941Z 10006,1,Hello10006,1,10 > 10006,1,Hello10006,1,10 > 2020-05-20T05:45:19.9413241Z 10007,1,Hello10007,1,10 > 10007,1,Hello10007,1,10 > 2020-05-20T05:45:19.9413555Z 10008,1,Hello10008,1,10 > 10008,1,Hello10008,1,10 > 2020-05-20T05:45:19.9413977Z 10009,1,Hello10009,1,10 > 10009,1,Hello10009,1,10 > 2020-05-20T05:45:19.9414377Z 1001,1,Hello1001,1,10 > 1001,1,Hello1001,1,10 > 2020-05-20T05:45:19.9414686Z 10010,1,Hello10010,1,10 > 10010,1,Hello10010,1,10 > 2020-05-20T05:45:19.9415462Z 10011,1,Hello10011,1,10 > 10011,1,Hello10011,1,10 > 2020-05-20T05:45:19.9415783Z 10012,1,Hello10012,1,10 > 10012,1,Hello10012,1,10 > 2020-05-20T05:45:19.9416081Z 10013,1,Hello10013,1,10 > 10013,1,Hello10013,1,10 > 2020-05-20T05:45:19.9416926Z 10014,1,Hello10014,1,10 > 10014,1,Hello10014,1,10 
> 2020-05-20T05:45:19.9417349Z 10015,1,Hello10015,1,10 > 10015,1,Hello10015,1,10 > 2020-05-20T05:45:19.9417664Z 10016,1,Hello10016,1,10 > 10016,1,Hello10016,1,10 > 2020-05-20T05:45:19.9418011Z 10017,1,Hello10017,1,10 > 10017,1,Hello10017,1,10 > {code}
[GitHub] [flink] fpompermaier commented on pull request #11900: [FLINK-17284][jdbc][postgres] Support serial fields
fpompermaier commented on pull request #11900: URL: https://github.com/apache/flink/pull/11900#issuecomment-631442974 Updated PR to resolve conflicts
[jira] [Commented] (FLINK-17828) AggregateReduceGroupingITCase fails on azure
[ https://issues.apache.org/jira/browse/FLINK-17828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112129#comment-17112129 ] Chesnay Schepler commented on FLINK-17828: -- Am I blind or is there no visible difference between the two? This isn't just some encoding/newline/whitespace issue again is it?
[GitHub] [flink] dawidwys closed pull request #12232: [FLINK-15947] Finish moving scala expression DSL to flink-table-api-scala
dawidwys closed pull request #12232: URL: https://github.com/apache/flink/pull/12232
[GitHub] [flink] flinkbot edited a comment on pull request #12263: [FLINK-16998][core] Support backwards compatibility for upgraded RowSerializer
flinkbot edited a comment on pull request #12263: URL: https://github.com/apache/flink/pull/12263#issuecomment-631274882 ## CI report: * 0e1d9cde275d0717fb9b32f6d1a3aed600c33166 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1933)
[GitHub] [flink] flinkbot edited a comment on pull request #12266: [FLINK-17843][table-api] Check the RowKind when converting a Row from object to an expression
flinkbot edited a comment on pull request #12266: URL: https://github.com/apache/flink/pull/12266#issuecomment-631427812 ## CI report: * f33808f833da63c5563b48688053d49dedc46538 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1938)
[GitHub] [flink] flinkbot edited a comment on pull request #12232: [FLINK-15947] Finish moving scala expression DSL to flink-table-api-scala
flinkbot edited a comment on pull request #12232: URL: https://github.com/apache/flink/pull/12232#issuecomment-630355116 ## CI report: * bde94ff2e28c3b8d1b9e2b25c38afa24f8a558fd UNKNOWN * cffb27bb10c6d5da974483fbe8a32e562a0484e8 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1936)
[GitHub] [flink] flinkbot edited a comment on pull request #12264: [FLINK-17558][netty] Release partitions asynchronously
flinkbot edited a comment on pull request #12264:
URL: https://github.com/apache/flink/pull/12264#issuecomment-631349883

## CI report:

* 19c5f57b94cc56b70002031618c32d9e6f68effb UNKNOWN
* 9dbaf3094c0942b96a01060aba9d4ffbad9d1857 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1934)
[GitHub] [flink] flinkbot edited a comment on pull request #12096: [FLINK-16074][docs-zh] Translate the Overview page for State & Fault Tolerance into Chinese
flinkbot edited a comment on pull request #12096:
URL: https://github.com/apache/flink/pull/12096#issuecomment-627237312

## CI report:

* d5ca90e68a87b35c5969ef79b099164d850381ff Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1918)
[jira] [Commented] (FLINK-17828) AggregateReduceGroupingITCase fails on azure
[ https://issues.apache.org/jira/browse/FLINK-17828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112123#comment-17112123 ]

Jark Wu commented on FLINK-17828:
---------------------------------
cc [~godfreyhe]. I can't reproduce this error locally.

> AggregateReduceGroupingITCase fails on azure
> --------------------------------------------
>
> Key: FLINK-17828
> URL: https://issues.apache.org/jira/browse/FLINK-17828
> Project: Flink
> Issue Type: Bug
> Components: Table SQL / Planner
> Affects Versions: 1.12.0
> Reporter: Dawid Wysakowicz
> Priority: Blocker
> Labels: test-stability
>
> Failure:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1906&view=logs&j=e25d5e7e-2a9c-5589-4940-0b638d75a414&t=f83cd372-208c-5ec4-12a8-337462457129
>
> {code}
> 2020-05-20T05:45:19.9368056Z [ERROR] Tests run: 16, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 70.635 s <<< FAILURE! - in org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase
> 2020-05-20T05:45:19.9400043Z [ERROR] testSingleAggOnTable_SortAgg(org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase)  Time elapsed: 17.264 s  <<< FAILURE!
> java.lang.AssertionError:
>
> Results do not match for query:
>   SELECT a6, b6, max(c6), count(d6), sum(e6) FROM T6 GROUP BY a6, b6
>
> Results
> == Correct Result - 5 ==       == Actual Result - 5 ==
> 0,1,null,1,10                  0,1,null,1,10
> 1,1,Hello1,1,10                1,1,Hello1,1,10
> 10,1,Hello10,1,10              10,1,Hello10,1,10
> 100,1,Hello100,1,10            100,1,Hello100,1,10
> 1000,1,null,1,10               1000,1,null,1,10
> 1,1,null,1,10                  1,1,null,1,10
> 10001,1,Hello10001,1,10        10001,1,Hello10001,1,10
> 10002,1,Hello10002,1,10        10002,1,Hello10002,1,10
> 10003,1,Hello10003,1,10        10003,1,Hello10003,1,10
> 10004,1,Hello10004,1,10        10004,1,Hello10004,1,10
> 10005,1,Hello10005,1,10        10005,1,Hello10005,1,10
> 10006,1,Hello10006,1,10        10006,1,Hello10006,1,10
> 10007,1,Hello10007,1,10        10007,1,Hello10007,1,10
> 10008,1,Hello10008,1,10        10008,1,Hello10008,1,10
> 10009,1,Hello10009,1,10        10009,1,Hello10009,1,10
> 1001,1,Hello1001,1,10          1001,1,Hello1001,1,10
> 10010,1,Hello10010,1,10        10010,1,Hello10010,1,10
> 10011,1,Hello10011,1,10        10011,1,Hello10011,1,10
> 10012,1,Hello10012,1,10        10012,1,Hello10012,1,10
> 10013,1,Hello10013,1,10        10013,1,Hello10013,1,10
> 10014,1,Hello10014,1,10        10014,1,Hello10014,1,10
> 10015,1,Hello10015,1,10        10015,1,Hello10015,1,10
> 10016,1,Hello10016,1,10        10016,1,Hello10016,1,10
> 10017,1,Hello10017,1,10        10017,1,Hello10017,1,10
> {code}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[GitHub] [flink] flinkbot commented on pull request #12266: [FLINK-17843][table-api] Check the RowKind when converting a Row from object to an expression
flinkbot commented on pull request #12266:
URL: https://github.com/apache/flink/pull/12266#issuecomment-631427812

## CI report:

* f33808f833da63c5563b48688053d49dedc46538 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #11906: [FLINK-17356][jdbc][postgres] Support PK and Unique constraints
flinkbot edited a comment on pull request #11906:
URL: https://github.com/apache/flink/pull/11906#issuecomment-619214462

## CI report:

* 2e339ca93fcf4461ddb3502b49ab34083fc96cf6 UNKNOWN
* 66afd5253c17fae0a41bc38f41338a69268ca4ff Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1917)
* 1310d3ed1bad9e2356a320128cac125e930831dc Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1932)
[jira] [Closed] (FLINK-17622) Remove useless switch for decimal in PostresCatalog
[ https://issues.apache.org/jira/browse/FLINK-17622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jark Wu closed FLINK-17622.
---------------------------
Fix Version/s: 1.11.0
Resolution: Fixed

Fixed in master (1.12.0): d7fc0d0620eae583ac71352e884e38affcc9f9e9
Fixed in 1.11.0: 6097d97a39877758d2729242186a19d86220e6ea

> Remove useless switch for decimal in PostgresCatalog
> ----------------------------------------------------
>
> Key: FLINK-17622
> URL: https://issues.apache.org/jira/browse/FLINK-17622
> Project: Flink
> Issue Type: Sub-task
> Components: Connectors / JDBC
> Reporter: Flavio Pompermaier
> Assignee: Flavio Pompermaier
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.11.0
>
> Remove the useless switch for decimal fields. The Postgres JDBC connector
> translates them to numeric.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
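The redundancy described in the issue can be sketched as follows. This is a hypothetical Python stand-in for the catalog's JDBC-type-name mapping, not the actual `PostgresCatalog` Java code; the type names, the `to_flink_type` helper, and the precision/scale handling are all simplified for illustration:

```python
def to_flink_type(pg_type: str, precision: int = 38, scale: int = 18) -> str:
    """Map a Postgres JDBC type name to a Flink SQL type string (sketch)."""
    mapping = {
        "int4": "INT",
        "int8": "BIGINT",
        "float8": "DOUBLE",
        "varchar": "STRING",
        # A separate "decimal" branch would be dead code: the Postgres JDBC
        # driver reports decimal columns under the type name "numeric".
        "numeric": f"DECIMAL({precision}, {scale})",
    }
    try:
        return mapping[pg_type]
    except KeyError:
        raise ValueError(f"Unsupported Postgres type: {pg_type}")
```

Under this sketch, a `"decimal"` lookup never occurs in practice, which is why the extra case could be removed without changing behavior.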
[GitHub] [flink] wuchong merged pull request #12090: [FLINK-17622][connectors / jdbc] Remove useless switch for decimal in PostgresCatalog
wuchong merged pull request #12090:
URL: https://github.com/apache/flink/pull/12090
[GitHub] [flink] wuchong commented on pull request #12090: [FLINK-17622][connectors / jdbc] Remove useless switch for decimal in PostgresCatalog
wuchong commented on pull request #12090:
URL: https://github.com/apache/flink/pull/12090#issuecomment-631426574

Passed. Merging...
[GitHub] [flink] flinkbot commented on pull request #12266: [FLINK-17843][table-api] Check the RowKind when converting a Row from object to an expression
flinkbot commented on pull request #12266:
URL: https://github.com/apache/flink/pull/12266#issuecomment-631422615

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review.

## Automated Checks
Last check on commit f33808f833da63c5563b48688053d49dedc46538 (Wed May 20 11:43:29 UTC 2020)

**Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.

## Review Progress

* ❓ 1. The [description] looks good.
* ❓ 2. There is [consensus] that the contribution should go into Flink.
* ❓ 3. Needs [attention] from.
* ❓ 4. The change fits into the overall [architecture].
* ❓ 5. Overall code [quality] is good.

Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:
- `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until `architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink-statefun] tzulitai opened a new pull request #115: [FLINK-17518] [e2e] Add remote module E2E
tzulitai opened a new pull request #115:
URL: https://github.com/apache/flink-statefun/pull/115

This PR adds an E2E test that consists of a complete YAML-based remote module, with:
- a YAML auto-routable Kafka ingress
- a YAML generic Kafka egress
- remote functions using the Python SDK

Since the coverage completely subsumes the routable Kafka E2E, the routable Kafka E2E test is removed in favor of this new E2E. Please see the class-level docs of `RemoteModuleE2E` for details of the test scenario.

## Brief change log
- a8b4b7b Preliminary change to the Python SDK build script, to allow it to be run with Maven
- 470dc52 Make the `-Prun-e2e-tests` build profile also build the Python SDK. This is required because the remote module E2E test requires the Python SDK wheels to be built.
- ce76abd The test scenario of the new `RemoteModuleE2E` results in output records being written to Kafka in nondeterministic order. This commit adds a matcher to the `KafkaIOVerifier` that matches the consumed outputs in any order.
- 31b616d The Python remote functions
- 82e93b7 The actual E2E test implementation
- 95a1366 Removes the routable Kafka E2E

## Verifying
Travis should pass, or locally run `mvn clean install -Prun-e2e-tests`
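The order-insensitive matching mentioned in commit ce76abd amounts to a multiset comparison. As a hypothetical illustration only (the actual `KafkaIOVerifier` matcher is implemented in Java, and this stand-in `matches_any_order` function is not part of it), the idea might be sketched as:

```python
from collections import Counter

def matches_any_order(expected, actual):
    """Return True when `actual` contains exactly the records in `expected`,
    regardless of order.

    Remote functions may interleave their writes to the Kafka egress, so a
    verifier must not assume a deterministic output order. Comparing
    multisets (Counter) still catches missing, extra, or duplicated records.
    """
    return Counter(expected) == Counter(actual)
```

A plain set comparison would not suffice here, since it would silently accept duplicated or dropped records that happen to share values.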
[jira] [Updated] (FLINK-17518) Add HTTP-based request reply protocol E2E test for Stateful Functions
[ https://issues.apache.org/jira/browse/FLINK-17518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-17518:
-----------------------------------
Labels: pull-request-available (was: )

> Add HTTP-based request reply protocol E2E test for Stateful Functions
> ---------------------------------------------------------------------
>
> Key: FLINK-17518
> URL: https://issues.apache.org/jira/browse/FLINK-17518
> Project: Flink
> Issue Type: Sub-task
> Components: Stateful Functions
> Reporter: Tzu-Li (Gordon) Tai
> Assignee: Tzu-Li (Gordon) Tai
> Priority: Blocker
> Labels: pull-request-available
> Fix For: statefun-2.0.1, statefun-2.1.0
>
> The E2E test should consist of a standalone deployed containerized remote
> function, e.g. using the Python SDK + Flask, as well as a Flink Stateful
> Functions cluster deployed using the {{StatefulFunctionsAppsContainers}}
> utility.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (FLINK-17843) Check for RowKind when converting Row to expression
[ https://issues.apache.org/jira/browse/FLINK-17843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-17843:
-----------------------------------
Labels: pull-request-available (was: )

> Check for RowKind when converting Row to expression
> ---------------------------------------------------
>
> Key: FLINK-17843
> URL: https://issues.apache.org/jira/browse/FLINK-17843
> Project: Flink
> Issue Type: Sub-task
> Components: Table SQL / API
> Affects Versions: 1.11.0
> Reporter: Dawid Wysakowicz
> Assignee: Dawid Wysakowicz
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.11.0
>
> A row constructor does not allow for a RowKind, so we should check whether a
> RowKind is set when converting from {{Row}} to an expression.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[GitHub] [flink] dawidwys opened a new pull request #12266: [FLINK-17843][table-api] Check the RowKind when converting a Row from object to an expression
dawidwys opened a new pull request #12266:
URL: https://github.com/apache/flink/pull/12266

## What is the purpose of the change
The row constructor expression does not support a RowKind flag; it is only possible to create constant expressions from an INSERT row. This PR adds a check on the `RowKind` when converting a Row to an expression.

## Verifying this change
Added tests in `org.apache.flink.table.expressions.ObjectToExpressionTest`.

## Does this pull request potentially affect one of the following parts:
- Dependencies (does it add or upgrade a dependency): (yes / **no**)
- The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**)
- The serializers: (yes / **no** / don't know)
- The runtime per-record code paths (performance sensitive): (yes / **no** / don't know)
- Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
- The S3 file system connector: (yes / **no** / don't know)

## Documentation
- Does this pull request introduce a new feature? (yes / **no**)
- If yes, how is the feature documented? (**not applicable** / docs / JavaDocs / not documented)
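The check described above can be illustrated with a minimal sketch. The names below (`Row`, `RowKind`, `convert_to_expression`, and the tuple-based expression encoding) are simplified Python stand-ins for Flink's Java classes, not the actual implementation in the Table API:

```python
from enum import Enum

class RowKind(Enum):
    INSERT = "+I"
    UPDATE_BEFORE = "-U"
    UPDATE_AFTER = "+U"
    DELETE = "-D"

class Row:
    def __init__(self, kind, fields):
        self.kind = kind
        self.fields = fields

def convert_to_expression(row):
    """Convert a Row object to a constant row-constructor expression (sketch)."""
    # A row constructor can only express a constant INSERT row, so any
    # other RowKind must be rejected up front instead of being silently lost.
    if row.kind != RowKind.INSERT:
        raise ValueError(
            f"Cannot convert a Row with kind {row.kind.value} to an expression; "
            "only INSERT rows are supported.")
    return ("row", [("literal", f) for f in row.fields])
```

The point of the check is that without it, a `-D` or `+U` row would be converted to the same expression as a `+I` row, dropping the changelog semantics.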
[jira] [Assigned] (FLINK-17518) Add HTTP-based request reply protocol E2E test for Stateful Functions
[ https://issues.apache.org/jira/browse/FLINK-17518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tzu-Li (Gordon) Tai reassigned FLINK-17518:
-------------------------------------------
Assignee: Tzu-Li (Gordon) Tai

> Add HTTP-based request reply protocol E2E test for Stateful Functions
> ---------------------------------------------------------------------
>
> Key: FLINK-17518
> URL: https://issues.apache.org/jira/browse/FLINK-17518
> Project: Flink
> Issue Type: Sub-task
> Components: Stateful Functions
> Reporter: Tzu-Li (Gordon) Tai
> Assignee: Tzu-Li (Gordon) Tai
> Priority: Blocker
> Fix For: statefun-2.0.1, statefun-2.1.0
>
> The E2E test should consist of a standalone deployed containerized remote
> function, e.g. using the Python SDK + Flask, as well as a Flink Stateful
> Functions cluster deployed using the {{StatefulFunctionsAppsContainers}}
> utility.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on pull request #12232: [FLINK-15947] Finish moving scala expression DSL to flink-table-api-scala
flinkbot edited a comment on pull request #12232:
URL: https://github.com/apache/flink/pull/12232#issuecomment-630355116

## CI report:

* efc125913ce29720089ebc8ef13131da3c2fab8a Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1931)
* bde94ff2e28c3b8d1b9e2b25c38afa24f8a558fd UNKNOWN
* cffb27bb10c6d5da974483fbe8a32e562a0484e8 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1936)