[jira] [Closed] (FLINK-17356) Pass table's primary key to catalog table in PostgresCatalog
[ https://issues.apache.org/jira/browse/FLINK-17356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu closed FLINK-17356. --- Fix Version/s: 1.11.0 Resolution: Fixed Add IT cases for inserting a group-by query into a postgres catalog table - master (1.12.0): fa3768a82fd880178f5c8cb71c28510dd4db4d30 - 1.11.0: a83ee6c90b605f0807a40c82f2f5879f80f1f2dd Support PK and Unique constraints - master (1.12.0): 38ada4ad5ece2d28707e9403278133d8e5790ec0 - 1.11.0: b37626bd2a43f9a39a954ef63a924da23a2c3825 > Pass table's primary key to catalog table in PostgresCatalog > > > Key: FLINK-17356 > URL: https://issues.apache.org/jira/browse/FLINK-17356 > Project: Flink > Issue Type: Sub-task > Components: Connectors / JDBC, Table SQL / Ecosystem >Reporter: Flavio Pompermaier >Assignee: Flavio Pompermaier >Priority: Major > Labels: pull-request-available > Fix For: 1.11.0 > > > At the moment the PostgresCatalog does not create field constraints (at the > moment there are only UNIQUE and PRIMARY_KEY in the TableSchema; would it > be worth also adding NOT_NULL?) > We only pass the primary key to the catalog table for now. UNIQUE and NOT NULL > information will be future work. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] Jiayi-Liao commented on a change in pull request #12243: [FLINK-17805][network] Fix ArrayIndexOutOfBound for rotated input gate indexes
Jiayi-Liao commented on a change in pull request #12243: URL: https://github.com/apache/flink/pull/12243#discussion_r428063945 ## File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/io/InputProcessorUtil.java ## @@ -79,11 +80,26 @@ public static CheckpointedInputGate createCheckpointedInputGate( unionedInputGates[i] = InputGateUtil.createInputGate(inputGates[i].toArray(new IndexedInputGate[0])); } + IntStream numberOfInputChannelsPerGate = + Arrays + .stream(inputGates) + .flatMap(collection -> collection.stream()) + .sorted(Comparator.comparingInt(IndexedInputGate::getGateIndex)) + .mapToInt(InputGate::getNumberOfInputChannels); + Map inputGateToChannelIndexOffset = generateInputGateToChannelIndexOffsetMap(unionedInputGates); + // Note that numberOfInputChannelsPerGate and inputGateToChannelIndexOffset have a bit different Review comment: You're right. I didn't notice that `inputGateToChannelIndexOffset`'s key is a unioned InputGate. Thanks for pointing this out. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
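The offset bookkeeping discussed in the diff above can be illustrated in isolation. The sketch below is not Flink's actual code; `channelIndexOffsets`, its map-based inputs, and the sample channel counts are hypothetical stand-ins showing how each gate's absolute channel-index offset falls out of sorting gates by gate index (as the `Comparator.comparingInt(IndexedInputGate::getGateIndex)` line does) and accumulating a running sum:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ChannelOffsetSketch {

    // Illustrative sketch (not Flink's implementation): given each gate's channel
    // count keyed by gate index, compute every gate's absolute channel-index offset
    // by sorting on gate index and accumulating a running sum of channel counts.
    static Map<Integer, Integer> channelIndexOffsets(Map<Integer, Integer> channelsPerGate) {
        Map<Integer, Integer> offsets = new HashMap<>();
        List<Integer> gateIndexes = new ArrayList<>(channelsPerGate.keySet());
        Collections.sort(gateIndexes); // mirrors sorting by IndexedInputGate::getGateIndex
        int running = 0;
        for (int gateIndex : gateIndexes) {
            offsets.put(gateIndex, running);
            running += channelsPerGate.get(gateIndex);
        }
        return offsets;
    }

    public static void main(String[] args) {
        // "Rotated" arrival order: gate 2 has 3 channels, gate 0 has 2, gate 1 has 4.
        Map<Integer, Integer> channels = Map.of(2, 3, 0, 2, 1, 4);
        // After sorting by gate index: gate 0 starts at 0, gate 1 at 2, gate 2 at 6.
        System.out.println(channelIndexOffsets(channels));
    }
}
```

Without the sort, a rotated gate order would assign offsets from the arrival order instead of the gate index, which is the kind of misindexing the PR is fixing.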
[GitHub] [flink] flinkbot edited a comment on pull request #12244: [FLINK-17258][network] Fix couple of ITCases that were failing with enabled unaligned checkpoints
flinkbot edited a comment on pull request #12244: URL: https://github.com/apache/flink/pull/12244#issuecomment-630723509 ## CI report: * 3dcc9233af810b8be408665c0083fab404a2dea5 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1833) * 7fa2068a283b9471384248c1bf301e3d406b5f48 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1951) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot commented on pull request #12268: [FLINK-17375] Refactor travis_watchdog.sh into separate ci and azure scripts.
flinkbot commented on pull request #12268: URL: https://github.com/apache/flink/pull/12268#issuecomment-631512695 ## CI report: * 4ed6888375869e654816264124703e72439c6148 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #12243: [FLINK-17805][network] Fix ArrayIndexOutOfBound for rotated input gate indexes
flinkbot edited a comment on pull request #12243: URL: https://github.com/apache/flink/pull/12243#issuecomment-630723410 ## CI report: * a3be362324a56a5f9b118a09ea3552a3039acffe Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1950)
[jira] [Updated] (FLINK-17356) Pass table's primary key to catalog table in PostgresCatalog
[ https://issues.apache.org/jira/browse/FLINK-17356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu updated FLINK-17356: Description: At the moment the PostgresCatalog does not create field constraints (at the moment there are only UNIQUE and PRIMARY_KEY in the TableSchema; would it be worth also adding NOT_NULL?) We only pass the primary key to the catalog table for now. UNIQUE and NOT NULL information will be future work. was:At the moment the PostgresCatalog does not create field constraints (at the moment there's only UNIQUE and PRIMARY_KEY in the TableSchema..could it worth to add also NOT_NULL?) > Pass table's primary key to catalog table in PostgresCatalog > > > Key: FLINK-17356 > URL: https://issues.apache.org/jira/browse/FLINK-17356 > Project: Flink > Issue Type: Sub-task > Components: Connectors / JDBC, Table SQL / Ecosystem >Reporter: Flavio Pompermaier >Assignee: Flavio Pompermaier >Priority: Major > Labels: pull-request-available > > > At the moment the PostgresCatalog does not create field constraints (at the > moment there are only UNIQUE and PRIMARY_KEY in the TableSchema; would it > be worth also adding NOT_NULL?) > We only pass the primary key to the catalog table for now. UNIQUE and NOT NULL > information will be future work.
[jira] [Updated] (FLINK-17356) Pass table's primary key to catalog table in PostgresCatalog
[ https://issues.apache.org/jira/browse/FLINK-17356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu updated FLINK-17356: Summary: Pass table's primary key to catalog table in PostgresCatalog (was: Properly set constraints (PK and UNIQUE)) > Pass table's primary key to catalog table in PostgresCatalog > > > Key: FLINK-17356 > URL: https://issues.apache.org/jira/browse/FLINK-17356 > Project: Flink > Issue Type: Sub-task > Components: Connectors / JDBC, Table SQL / Ecosystem >Reporter: Flavio Pompermaier >Assignee: Flavio Pompermaier >Priority: Major > Labels: pull-request-available > > > At the moment the PostgresCatalog does not create field constraints (at the > moment there are only UNIQUE and PRIMARY_KEY in the TableSchema; would it > be worth also adding NOT_NULL?)
[jira] [Commented] (FLINK-15503) FileUploadHandlerTest.testMixedMultipart and FileUploadHandlerTest. testUploadCleanupOnUnknownAttribute failed on Azure
[ https://issues.apache.org/jira/browse/FLINK-15503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112275#comment-17112275 ] Robert Metzger commented on FLINK-15503: I cancelled my test after 800 successful test runs. The slowest test run was 32.6 seconds. > FileUploadHandlerTest.testMixedMultipart and FileUploadHandlerTest. > testUploadCleanupOnUnknownAttribute failed on Azure > --- > > Key: FLINK-15503 > URL: https://issues.apache.org/jira/browse/FLINK-15503 > Project: Flink > Issue Type: Bug > Components: Runtime / REST, Tests >Affects Versions: 1.10.0 >Reporter: Till Rohrmann >Priority: Critical > Labels: test-stability > Fix For: 1.10.0 > > > The tests {{FileUploadHandlerTest.testMixedMultipart}} and > {{FileUploadHandlerTest. testUploadCleanupOnUnknownAttribute}} failed on > Azure with > {code} > 2020-01-07T09:32:06.9840445Z [ERROR] > testUploadCleanupOnUnknownAttribute(org.apache.flink.runtime.rest.FileUploadHandlerTest) > Time elapsed: 12.457 s <<< ERROR! > 2020-01-07T09:32:06.9850865Z java.net.SocketTimeoutException: timeout > 2020-01-07T09:32:06.9851650Z at > org.apache.flink.runtime.rest.FileUploadHandlerTest.testUploadCleanupOnUnknownAttribute(FileUploadHandlerTest.java:234) > 2020-01-07T09:32:06.9852910Z Caused by: java.net.SocketException: Socket > closed > 2020-01-07T09:32:06.9853465Z at > org.apache.flink.runtime.rest.FileUploadHandlerTest.testUploadCleanupOnUnknownAttribute(FileUploadHandlerTest.java:234) > 2020-01-07T09:32:06.9853855Z > 2020-01-07T09:32:06.9854362Z [ERROR] > testMixedMultipart(org.apache.flink.runtime.rest.FileUploadHandlerTest) Time > elapsed: 10.091 s <<< ERROR! 
> 2020-01-07T09:32:06.9855125Z java.net.SocketTimeoutException: Read timed out > 2020-01-07T09:32:06.9855652Z at > org.apache.flink.runtime.rest.FileUploadHandlerTest.testMixedMultipart(FileUploadHandlerTest.java:154) > 2020-01-07T09:32:06.9856034Z > {code} > https://dev.azure.com/rmetzger/Flink/_build/results?buildId=4159=results
[GitHub] [flink] wuchong closed pull request #11906: [FLINK-17356][jdbc][postgres] Support PK and Unique constraints
wuchong closed pull request #11906: URL: https://github.com/apache/flink/pull/11906
[jira] [Closed] (FLINK-16922) DecimalData.toUnscaledBytes should be consistent with BigDecimal.unscaledValue.toByteArray
[ https://issues.apache.org/jira/browse/FLINK-16922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu closed FLINK-16922. --- Resolution: Fixed master (1.12.0): 34671add8a435ee4431f4c1c4da37a8e078b7a8a 1.11.0: f7356560145f2bb862d1608264de3cf476f4abba > DecimalData.toUnscaledBytes should be consistent with > BigDecimal.unscaledValue.toByteArray > -- > > Key: FLINK-16922 > URL: https://issues.apache.org/jira/browse/FLINK-16922 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Reporter: Jingsong Lee >Assignee: Jark Wu >Priority: Critical > Labels: pull-request-available > Fix For: 1.11.0 > > > In Decimal: > {code:java} > public byte[] toUnscaledBytes() { >if (!isCompact()) { > return toBigDecimal().unscaledValue().toByteArray(); >} >// big endian; consistent with BigInteger.toByteArray() >byte[] bytes = new byte[8]; >long l = longVal; >for (int i = 0; i < 8; i++) { > bytes[7 - i] = (byte) l; > l >>>= 8; >} >return bytes; > } > {code} > When the value is compact, this returns a fixed-length 8-byte array. > It should not, because that produces a byte array incompatible with > BigDecimal.unscaledValue().toByteArray().
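The inconsistency described above can be demonstrated with a minimal, self-contained sketch. This is not the actual Flink fix; `toUnscaledBytes` below is a hypothetical helper showing that deriving the compact case's bytes via `BigInteger` makes it agree with `BigDecimal.unscaledValue().toByteArray()`, instead of padding to a fixed 8 bytes:

```java
import java.math.BigDecimal;
import java.math.BigInteger;
import java.util.Arrays;

public class UnscaledBytesSketch {

    // Hypothetical fix: derive the compact case's unscaled bytes from BigInteger,
    // which uses the minimal two's-complement big-endian representation and thus
    // matches BigDecimal.unscaledValue().toByteArray() exactly.
    static byte[] toUnscaledBytes(long compactLongVal) {
        return BigInteger.valueOf(compactLongVal).toByteArray();
    }

    public static void main(String[] args) {
        BigDecimal d = new BigDecimal("12.34");   // unscaled value 1234, fits in a long
        byte[] viaBigDecimal = d.unscaledValue().toByteArray(); // 2 bytes, not 8
        byte[] viaCompact = toUnscaledBytes(1234L);
        System.out.println(Arrays.equals(viaBigDecimal, viaCompact)); // prints "true"
    }
}
```

The fixed 8-byte array in the original code path would differ from this (length 8 vs. 2 for the value 1234), which is exactly the incompatibility the issue reports.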
[jira] [Updated] (FLINK-17750) YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint failed on azure
[ https://issues.apache.org/jira/browse/FLINK-17750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Till Rohrmann updated FLINK-17750: -- Fix Version/s: 1.11.0 > YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint failed on > azure > --- > > Key: FLINK-17750 > URL: https://issues.apache.org/jira/browse/FLINK-17750 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.11.0 >Reporter: Roman Khachatryan >Priority: Critical > Labels: test-stability > Fix For: 1.11.0 > > > [https://dev.azure.com/khachatryanroman/810e80cc-0656-4d3c-9d8c-186764456a01/_apis/build/builds/6/logs/156] > > {code:java} > 2020-05-15T23:42:29.5307581Z [ERROR] > testKillYarnSessionClusterEntrypoint(org.apache.flink.yarn.YARNHighAvailabilityITCase) > Time elapsed: 21.68 s <<< ERROR! > 2020-05-15T23:42:29.5308406Z java.util.concurrent.ExecutionException: > 2020-05-15T23:42:29.5308864Z > org.apache.flink.runtime.rest.util.RestClientException: [Internal server > error., 2020-05-15T23:42:29.5309678Z java.util.concurrent.TimeoutException: > Invocation of public abstract java.util.concurrent.CompletableFuture > org.apache.flink.runtime.dispatcher.DispatcherGateway.requestJob(org.apache.flink.api.common.JobID,org.apache.flink.api.common.time.Time) > timed out.
> 2020-05-15T23:42:29.5310322Z at com.sun.proxy.$Proxy33.requestJob(Unknown > Source) > 2020-05-15T23:42:29.5311018Z at > org.apache.flink.runtime.rest.handler.legacy.DefaultExecutionGraphCache.getExecutionGraphInternal(DefaultExecutionGraphCache.java:103) > 2020-05-15T23:42:29.5311704Z at > org.apache.flink.runtime.rest.handler.legacy.DefaultExecutionGraphCache.getExecutionGraph(DefaultExecutionGraphCache.java:71) > 2020-05-15T23:42:29.5312355Z at > org.apache.flink.runtime.rest.handler.job.AbstractExecutionGraphHandler.handleRequest(AbstractExecutionGraphHandler.java:75) > 2020-05-15T23:42:29.5312924Z at > org.apache.flink.runtime.rest.handler.AbstractRestHandler.respondToRequest(AbstractRestHandler.java:73) > 2020-05-15T23:42:29.5313423Z at > org.apache.flink.runtime.rest.handler.AbstractHandler.respondAsLeader(AbstractHandler.java:172) > 2020-05-15T23:42:29.5314497Z at > org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.lambda$channelRead0$0(LeaderRetrievalHandler.java:81) > 2020-05-15T23:42:29.5315083Z at > java.util.Optional.ifPresent(Optional.java:159) > 2020-05-15T23:42:29.5315474Z at > org.apache.flink.util.OptionalConsumer.ifPresent(OptionalConsumer.java:46) > 2020-05-15T23:42:29.5315979Z at > org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.channelRead0(LeaderRetrievalHandler.java:78) > 2020-05-15T23:42:29.5316520Z at > org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.channelRead0(LeaderRetrievalHandler.java:49) > 2020-05-15T23:42:29.5317092Z at > org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > 2020-05-15T23:42:29.5317705Z at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) > 2020-05-15T23:42:29.5318586Z at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) >
2020-05-15T23:42:29.5319249Z at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) > 2020-05-15T23:42:29.5319729Z at > org.apache.flink.runtime.rest.handler.router.RouterHandler.routed(RouterHandler.java:110) > 2020-05-15T23:42:29.5320136Z at > org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:89) > 2020-05-15T23:42:29.5320742Z at > org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:54) > 2020-05-15T23:42:29.5321195Z at > org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > 2020-05-15T23:42:29.5321730Z at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) > 2020-05-15T23:42:29.5322263Z at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) > 2020-05-15T23:42:29.5322806Z at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) > 2020-05-15T23:42:29.5323335Z at >
[GitHub] [flink] wuchong merged pull request #12265: [FLINK-16922][table-common] Fix DecimalData.toUnscaledBytes() should be consistent with BigDecimal.unscaledValue.toByteArray()
wuchong merged pull request #12265: URL: https://github.com/apache/flink/pull/12265
[GitHub] [flink] wuchong commented on pull request #11906: [FLINK-17356][jdbc][postgres] Support PK and Unique constraints
wuchong commented on pull request #11906: URL: https://github.com/apache/flink/pull/11906#issuecomment-631504885 Passed. Merging...
[jira] [Updated] (FLINK-17844) Activate japicmp-maven-plugin checks for @PublicEvolving between bug fix releases (x.y.u -> x.y.v)
[ https://issues.apache.org/jira/browse/FLINK-17844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Till Rohrmann updated FLINK-17844: -- Fix Version/s: 1.11.0 > Activate japicmp-maven-plugin checks for @PublicEvolving between bug fix > releases (x.y.u -> x.y.v) > -- > > Key: FLINK-17844 > URL: https://issues.apache.org/jira/browse/FLINK-17844 > Project: Flink > Issue Type: New Feature > Components: Build System >Reporter: Till Rohrmann >Priority: Critical > Fix For: 1.11.0 > > > According to > https://lists.apache.org/thread.html/rc58099fb0e31d0eac951a7bbf7f8bda8b7b65c9ed0c04622f5333745%40%3Cdev.flink.apache.org%3E, > the community has decided to establish stricter API and binary stability > guarantees. Concretely, the community voted to guarantee API and binary > stability for {{@PublicEvolving}} annotated classes between bug fix release > (x.y.u -> x.y.v). > Hence, I would suggest to activate this check by adding a new > {{japicmp-maven-plugin}} entry into Flink's {{pom.xml}} which checks for > {{@PublicEvolving}} classes between bug fix releases. We might have to update > the release guide to also include updating this configuration entry.
[GitHub] [flink] tillrohrmann commented on a change in pull request #12264: [FLINK-17558][netty] Release partitions asynchronously
tillrohrmann commented on a change in pull request #12264: URL: https://github.com/apache/flink/pull/12264#discussion_r428047035 ## File path: flink-runtime/src/test/java/org/apache/flink/runtime/taskexecutor/TaskExecutorPartitionLifecycleTest.java ## @@ -280,7 +273,65 @@ public void testClusterPartitionRelease() throws Exception { ); } - private void testPartitionRelease(PartitionTrackerSetup partitionTrackerSetup, TestAction testAction) throws Exception { + @Test + public void testBlockingLocalPartitionReleaseDoesNotBlockTaskExecutor() throws Exception { + BlockerSync sync = new BlockerSync(); + ResultPartitionManager blockingResultPartitionManager = new ResultPartitionManager() { + @Override + public void releasePartition(ResultPartitionID partitionId, Throwable cause) { + sync.blockNonInterruptible(); + super.releasePartition(partitionId, cause); + } + }; + + NettyShuffleEnvironment shuffleEnvironment = new NettyShuffleEnvironmentBuilder() + .setResultPartitionManager(blockingResultPartitionManager) + .setIoExecutor(java.util.concurrent.Executors.newFixedThreadPool(1)) Review comment: I would suggest to also shut this executor service down at the end of the test. It might be necessary to unblock the release operation for this. ## File path: flink-core/src/main/java/org/apache/flink/configuration/TaskManagerOptions.java ## @@ -490,6 +490,13 @@ + " size will be used. The exact size of JVM Overhead can be explicitly specified by setting the min/max" + " size to the same value."); + @Documentation.ExcludeFromDocumentation("This option just serves as a last-ditch escape hatch.") + public static final ConfigOption NUM_IO_THREADS = + key("taskmanager.io.threads.num") + .intType() + .defaultValue(2) + .withDescription("The number of threads to use for non-critical IO operations."); Review comment: We might be able to unify this configuration option with `ClusterOptions.CLUSTER_IO_EXECUTOR_POOL_SIZE`. 
## File path: flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/TaskManagerServices.java ## @@ -265,10 +265,15 @@ public static TaskManagerServices fromConfiguration( // start the I/O manager, it will create some temp directories. final IOManager ioManager = new IOManagerAsync(taskManagerServicesConfiguration.getTmpDirPaths()); + final ExecutorService ioExecutor = Executors.newFixedThreadPool( Review comment: Can the `ioExecutor` also replace the `taskIOExecutor`? ## File path: flink-runtime/src/test/java/org/apache/flink/runtime/io/network/NettyShuffleEnvironmentTest.java ## @@ -100,6 +105,27 @@ public void testRegisterTaskWithInsufficientBuffers() throws Exception { testRegisterTaskWithLimitedBuffers(bufferCount); } + @Test + public void testSlowIODoesNotBlockRelease() throws Exception { + BlockerSync sync = new BlockerSync(); Review comment: I guess a `OneShotLatch` would also work here if the test threads call the trigger on it.
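The teardown suggestion in the first review comment (unblock the pending release operation, then shut the executor service down) can be sketched generically with plain JDK concurrency primitives. The class and method names below are illustrative, not taken from the Flink test:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class ExecutorTeardownSketch {

    // Illustrative teardown pattern: a test that hands a fixed thread pool to the
    // component under test should first unblock any still-blocked work, then shut
    // the pool down and await termination, so no threads leak between tests.
    static boolean unblockThenShutdown(ExecutorService ioExecutor,
                                       CountDownLatch blocker,
                                       Future<?> pendingRelease) throws Exception {
        blocker.countDown();                       // unblock the simulated release operation
        pendingRelease.get(10, TimeUnit.SECONDS);  // wait for it to actually finish
        ioExecutor.shutdown();                     // then shut the executor down
        return ioExecutor.awaitTermination(10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService ioExecutor = Executors.newFixedThreadPool(1);
        CountDownLatch blocker = new CountDownLatch(1);
        Future<?> release = ioExecutor.submit(() -> {
            try {
                blocker.await();                   // stands in for the blocking releasePartition call
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        System.out.println(unblockThenShutdown(ioExecutor, blocker, release)); // prints "true"
    }
}
```

Shutting down without first releasing the blocker would leave `awaitTermination` waiting on the stuck task, which is why the review comment notes that unblocking may be necessary before shutdown.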
[GitHub] [flink] flinkbot edited a comment on pull request #12244: [FLINK-17258][network] Fix couple of ITCases that were failing with enabled unaligned checkpoints
flinkbot edited a comment on pull request #12244: URL: https://github.com/apache/flink/pull/12244#issuecomment-630723509 ## CI report: * 3dcc9233af810b8be408665c0083fab404a2dea5 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1833) * 7fa2068a283b9471384248c1bf301e3d406b5f48 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #12267: [FLINK-17842][network] Fix performance regression in SpanningWrapper#clear
flinkbot edited a comment on pull request #12267: URL: https://github.com/apache/flink/pull/12267#issuecomment-631489829 ## CI report: * 0afb379748084b4aef0fdf51c57e24044dfc31df Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1949)
[GitHub] [flink] flinkbot edited a comment on pull request #12264: [FLINK-17558][netty] Release partitions asynchronously
flinkbot edited a comment on pull request #12264: URL: https://github.com/apache/flink/pull/12264#issuecomment-631349883 ## CI report: * 19c5f57b94cc56b70002031618c32d9e6f68effb UNKNOWN * 9dbaf3094c0942b96a01060aba9d4ffbad9d1857 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1934) * bb313e40f5a72dbf20cd0a8b48267063fd4f00af UNKNOWN * eafbd98c812227cb7d9ce7158de1a23309855509 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #11906: [FLINK-17356][jdbc][postgres] Support PK and Unique constraints
flinkbot edited a comment on pull request #11906: URL: https://github.com/apache/flink/pull/11906#issuecomment-619214462 ## CI report: * 2e339ca93fcf4461ddb3502b49ab34083fc96cf6 UNKNOWN * 1310d3ed1bad9e2356a320128cac125e930831dc Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1932)
[GitHub] [flink] flinkbot edited a comment on pull request #12243: [FLINK-17805][network] Fix ArrayIndexOutOfBound for rotated input gate indexes
flinkbot edited a comment on pull request #12243: URL: https://github.com/apache/flink/pull/12243#issuecomment-630723410 ## CI report: * b956522108b0344ff004e859c0bc399dc8c38348 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1832) * a3be362324a56a5f9b118a09ea3552a3039acffe UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #12179: [FLINK-16144] get client.timeout for the client, with a fallback to the akka.client…
flinkbot edited a comment on pull request #12179: URL: https://github.com/apache/flink/pull/12179#issuecomment-629283467 ## CI report: * beb5343f2d9e91881e3c02cd0ef19230f22e21a9 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1496) * 439f5bb5f125322835886d5f9e12cb07a5625fcb UNKNOWN * 8725524bf20ae0a2a149be98090845b172c65cf6 UNKNOWN
[GitHub] [flink] rkhachatryan commented on a change in pull request #12244: [FLINK-17258][network] Fix couple of ITCases that were failing with enabled unaligned checkpoints
rkhachatryan commented on a change in pull request #12244: URL: https://github.com/apache/flink/pull/12244#discussion_r428043879 ## File path: flink-tests/src/test/java/org/apache/flink/test/classloading/ClassLoaderITCase.java ## @@ -300,7 +300,8 @@ public void testCheckpointingCustomKvStateJobWithCustomClassLoader() throws IOEx */ @Test public void testDisposeSavepointWithCustomKvState() throws Exception { - ClusterClient clusterClient = new MiniClusterClient(new Configuration(), miniClusterResource.getMiniCluster()); + Configuration configuration = new Configuration(); Review comment: nit: I guess it was extracted to disable unaligned checkpoints, but then a CLI argument was used, so this variable can be inlined back.
[GitHub] [flink] flinkbot edited a comment on pull request #12181: [FLINK-17645][runtime] Reset SafetyNetCloseableRegistry#REAPER_THREAD if it fails to start
flinkbot edited a comment on pull request #12181: URL: https://github.com/apache/flink/pull/12181#issuecomment-629344595 ## CI report: * fbefe16eb3f7769b6daf6cfe1fa26b7a0f7130a8 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1930)
[jira] [Commented] (FLINK-15534) YARNSessionCapacitySchedulerITCase#perJobYarnClusterWithParallelism failed due to NPE
[ https://issues.apache.org/jira/browse/FLINK-15534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112250#comment-17112250 ] Robert Metzger commented on FLINK-15534: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1929=logs=fc5181b0-e452-5c8f-68de-1097947f6483=6b04ca5f-0b52-511d-19c9-52bf0d9fbdfa > YARNSessionCapacitySchedulerITCase#perJobYarnClusterWithParallelism failed > due to NPE > - > > Key: FLINK-15534 > URL: https://issues.apache.org/jira/browse/FLINK-15534 > Project: Flink > Issue Type: Bug > Components: Deployment / YARN >Affects Versions: 1.11.0 >Reporter: Yu Li >Assignee: Yang Wang >Priority: Blocker > > As titled, travis run fails with below error: > {code} > 07:29:22.417 [ERROR] > perJobYarnClusterWithParallelism(org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase) > Time elapsed: 16.263 s <<< ERROR! > java.lang.NullPointerException: > java.lang.NullPointerException > at > org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.getAggregateAppResourceUsage(RMAppAttemptMetrics.java:128) > at > org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.getApplicationResourceUsageReport(RMAppAttemptImpl.java:900) > at > org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.createAndGetApplicationReport(RMAppImpl.java:660) > at > org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplications(ClientRMService.java:930) > at > org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplications(ApplicationClientProtocolPBServiceImpl.java:273) > at > org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:507) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:847) 
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:790) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2486) > at > org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase.perJobYarnClusterWithParallelism(YARNSessionCapacitySchedulerITCase.java:405) > Caused by: org.apache.hadoop.ipc.RemoteException: > java.lang.NullPointerException > at > org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.getAggregateAppResourceUsage(RMAppAttemptMetrics.java:128) > at > org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.getApplicationResourceUsageReport(RMAppAttemptImpl.java:900) > at > org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.createAndGetApplicationReport(RMAppImpl.java:660) > at > org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplications(ClientRMService.java:930) > at > org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplications(ApplicationClientProtocolPBServiceImpl.java:273) > at > org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:507) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:847) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:790) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2486) > at > 
org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase.perJobYarnClusterWithParallelism(YARNSessionCapacitySchedulerITCase.java:405) > {code} > https://api.travis-ci.org/v3/job/634588108/log.txt -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot commented on pull request #12268: [FLINK-17375] Refactor travis_watchdog.sh into separate ci and azure scripts.
flinkbot commented on pull request #12268: URL: https://github.com/apache/flink/pull/12268#issuecomment-631497626 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 4ed6888375869e654816264124703e72439c6148 (Wed May 20 14:09:51 UTC 2020) **Warnings:** * Documentation files were touched, but no `.zh.md` files: Update Chinese documentation or file Jira ticket. Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-17375) Clean up CI system related scripts
[ https://issues.apache.org/jira/browse/FLINK-17375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-17375: --- Labels: pull-request-available (was: ) > Clean up CI system related scripts > -- > > Key: FLINK-17375 > URL: https://issues.apache.org/jira/browse/FLINK-17375 > Project: Flink > Issue Type: Sub-task > Components: Build System, Build System / Azure Pipelines >Reporter: Robert Metzger >Assignee: Robert Metzger >Priority: Major > Labels: pull-request-available > > Once we have only one CI system in place for Flink (again), it makes sense to > clean up the available scripts: > - Separate "Azure-specific" from "CI-generic" files (names of files, methods, > build profiles) > - separate "log handling" from "build timeout" in "travis_watchdog" > - remove workarounds needed because of Travis limitations -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] rmetzger opened a new pull request #12268: [FLINK-17375] Refactor travis_watchdog.sh into separate ci and azure scripts.
rmetzger opened a new pull request #12268: URL: https://github.com/apache/flink/pull/12268 ## What is the purpose of the change Clean up the CI-related scripts in `tools/`. ## Brief change log For reviewing this change, I recommend starting from the `job-template.yml` file to see how the scripts are connected. - travis_watchdog.sh used to be a combination of things: test stage control (including Python test invocation), debug artifact management (mostly uploading artifacts), and test timeout control. The biggest issue was how the Python tests were integrated into that file. I moved the "watchdog" functionality into a separate file, and created a new `test_controller.sh`. - azure_controller.sh used to be the entry point for the CI system, controlling the compile stage. I moved most of that logic into `tools/ci/compile.sh`. ## Verifying this change I have tested timing out builds (both for regular Maven/Surefire/Java timeouts and Python) to make sure the refactored watchdog works, and exit codes are properly forwarded. Once the PR has reached an acceptable state, I will also test the nightly builds on my personal Azure account to make sure the Python wheels definition works. The separation of changes into separate commits is not optimal (some YARN changes are a bit unrelated in the refactoring commit) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #12265: [FLINK-16922][table-common] Fix DecimalData.toUnscaledBytes() should be consistent with BigDecimla.unscaledValue.toByteArray()
flinkbot edited a comment on pull request #12265: URL: https://github.com/apache/flink/pull/12265#issuecomment-631389712 ## CI report: * 4f4662a0211a334a8033d317b57cd8755677c744 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1935) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #12264: [FLINK-17558][netty] Release partitions asynchronously
flinkbot edited a comment on pull request #12264: URL: https://github.com/apache/flink/pull/12264#issuecomment-631349883 ## CI report: * 19c5f57b94cc56b70002031618c32d9e6f68effb UNKNOWN * 9dbaf3094c0942b96a01060aba9d4ffbad9d1857 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1934) * bb313e40f5a72dbf20cd0a8b48267063fd4f00af UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #12267: [FLINK-17842][network] Fix performance regression in SpanningWrapper#clear
flinkbot commented on pull request #12267: URL: https://github.com/apache/flink/pull/12267#issuecomment-631489829 ## CI report: * 0afb379748084b4aef0fdf51c57e24044dfc31df UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (FLINK-17844) Activate japicmp-maven-plugin checks for @PublicEvolving between bug fix releases (x.y.u -> x.y.v)
Till Rohrmann created FLINK-17844: - Summary: Activate japicmp-maven-plugin checks for @PublicEvolving between bug fix releases (x.y.u -> x.y.v) Key: FLINK-17844 URL: https://issues.apache.org/jira/browse/FLINK-17844 Project: Flink Issue Type: New Feature Components: Build System Reporter: Till Rohrmann According to https://lists.apache.org/thread.html/rc58099fb0e31d0eac951a7bbf7f8bda8b7b65c9ed0c04622f5333745%40%3Cdev.flink.apache.org%3E, the community has decided to establish stricter API and binary stability guarantees. Concretely, the community voted to guarantee API and binary stability for {{@PublicEvolving}} annotated classes between bug fix releases (x.y.u -> x.y.v). Hence, I would suggest activating this check by adding a new {{japicmp-maven-plugin}} entry in Flink's {{pom.xml}} that checks for {{@PublicEvolving}} classes between bug fix releases. We might have to update the release guide to also include updating this configuration entry. -- This message was sent by Atlassian Jira (v8.3.4#803005)
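A configuration along the lines the ticket suggests could look as follows. This is a hypothetical sketch, not Flink's actual pom.xml: the `japicmp.referenceVersion` property and the exact parameter set are assumptions.

```xml
<plugin>
  <groupId>com.github.siom79.japicmp</groupId>
  <artifactId>japicmp-maven-plugin</artifactId>
  <configuration>
    <oldVersion>
      <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>${project.artifactId}</artifactId>
        <!-- previous bug fix release of the same x.y line,
             to be bumped as part of the release process -->
        <version>${japicmp.referenceVersion}</version>
      </dependency>
    </oldVersion>
    <parameter>
      <!-- only compare classes carrying the @PublicEvolving annotation -->
      <includes>
        <include>@org.apache.flink.annotation.PublicEvolving</include>
      </includes>
      <breakBuildOnBinaryIncompatibleModifications>true</breakBuildOnBinaryIncompatibleModifications>
    </parameter>
  </configuration>
</plugin>
```

As the ticket notes, the reference version would then have to be updated on every bug fix release, which is why the release guide would need a corresponding step.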
[GitHub] [flink] flinkbot edited a comment on pull request #12263: [FLINK-16998][core] Support backwards compatibility for upgraded RowSerializer
flinkbot edited a comment on pull request #12263: URL: https://github.com/apache/flink/pull/12263#issuecomment-631274882 ## CI report: * 0e1d9cde275d0717fb9b32f6d1a3aed600c33166 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1933) * 320f0a551c635e98c4aff4af6d853d3cf2681fee Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1944) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #12179: [FLINK-16144] get client.timeout for the client, with a fallback to the akka.client…
flinkbot edited a comment on pull request #12179: URL: https://github.com/apache/flink/pull/12179#issuecomment-629283467 ## CI report: * beb5343f2d9e91881e3c02cd0ef19230f22e21a9 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1496) * 439f5bb5f125322835886d5f9e12cb07a5625fcb UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] pnowojski commented on a change in pull request #12243: [FLINK-17805][network] Fix ArrayIndexOutOfBound for rotated input gate indexes
pnowojski commented on a change in pull request #12243: URL: https://github.com/apache/flink/pull/12243#discussion_r428017856 ## File path: flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/io/InputProcessorUtilTest.java ## @@ -58,4 +79,57 @@ public void testGenerateInputGateToChannelIndexOffsetMap() { assertEquals(0, inputGateToChannelIndexOffsetMap.get(ig1).intValue()); assertEquals(3, inputGateToChannelIndexOffsetMap.get(ig2).intValue()); } + + @Test + public void testCreateCheckpointedMultipleInputGate() throws Exception { + try (CloseableRegistry registry = new CloseableRegistry()) { + MockEnvironment environment = new MockEnvironmentBuilder().build(); + MockStreamTask streamTask = new MockStreamTaskBuilder(environment).build(); + StreamConfig streamConfig = new StreamConfig(environment.getJobConfiguration()); + streamConfig.setCheckpointMode(CheckpointingMode.EXACTLY_ONCE); + streamConfig.setUnalignedCheckpointsEnabled(true); + + // First input gate has index larger than the second + Collection<IndexedInputGate>[] inputGates = new Collection[] { + Collections.singletonList(new MockIndexedInputGate(1, 4)), + Collections.singletonList(new MockIndexedInputGate(0, 2)), + }; + + new MockChannelStateWriter() { Review comment: Oops, that's a leftover from a previous version. 
## File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/io/InputProcessorUtil.java ## @@ -79,11 +80,26 @@ public static CheckpointedInputGate createCheckpointedInputGate( unionedInputGates[i] = InputGateUtil.createInputGate(inputGates[i].toArray(new IndexedInputGate[0])); } + IntStream numberOfInputChannelsPerGate = + Arrays + .stream(inputGates) + .flatMap(collection -> collection.stream()) + .sorted(Comparator.comparingInt(IndexedInputGate::getGateIndex)) + .mapToInt(InputGate::getNumberOfInputChannels); + Map<InputGate, Integer> inputGateToChannelIndexOffset = generateInputGateToChannelIndexOffsetMap(unionedInputGates); + // Note that numberOfInputChannelsPerGate and inputGateToChannelIndexOffset have a bit different Review comment: Hmmm, I'm not sure, as what if left input has input gates with indexes `0` and `3`, while the right input has indexes `1`, `2` and `4`? (I'm not sure if that's a valid scenario in the JobGraphGenerator) Left input would have one instance of `UnionInputGate` over gates 0 and 3, while right input would have another instance with gates 1, 2 and 4. However we sort them, it would be somehow inconsistent? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
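The offset computation under discussion is easier to see concretely. A minimal, self-contained sketch (hypothetical gate data, not Flink's `InputProcessorUtil`) of deriving each gate's first-channel offset by sorting on the gate index, as the diffed stream does:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.Map;

public class ChannelOffsets {
    /**
     * Computes each gate's first-channel offset in a global channel numbering
     * by ordering gates on their gate index, mirroring the sorted stream in
     * the diff. gates[i] = {gateIndex, numberOfInputChannels}.
     */
    public static Map<Integer, Integer> offsetsByGateIndex(int[][] gates) {
        int[][] sorted = gates.clone();
        Arrays.sort(sorted, Comparator.comparingInt(g -> g[0]));
        Map<Integer, Integer> offsets = new LinkedHashMap<>();
        int offset = 0;
        for (int[] g : sorted) {
            offsets.put(g[0], offset);
            offset += g[1];
        }
        return offsets;
    }
}
```

With the reviewer's example, left input owning gates 0 and 3 and right input owning gates 1, 2 and 4, the globally sorted order interleaves channels belonging to the two `UnionInputGate` instances, so offsets computed per union gate cannot line up with a globally sorted per-gate stream; that mismatch is the inconsistency raised above.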
[jira] [Closed] (FLINK-17780) Add task name to log statements of ChannelStateWriter
[ https://issues.apache.org/jira/browse/FLINK-17780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Piotr Nowojski closed FLINK-17780. -- Resolution: Fixed merged commit e7f7c5e into apache:master and as 90ece8c119 to release-1.11 > Add task name to log statements of ChannelStateWriter > - > > Key: FLINK-17780 > URL: https://issues.apache.org/jira/browse/FLINK-17780 > Project: Flink > Issue Type: Improvement > Components: Runtime / Checkpointing >Reporter: Arvid Heise >Assignee: Arvid Heise >Priority: Major > Labels: pull-request-available > Fix For: 1.11.0 > > > Currently debugging unaligned checkpoint through logs is difficult as many > relevant log statements cannot be connected to the respective task. > > Add task name to the executor thread and to all method of ChannelStateWriter > (as they can be called from any other thread). -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on pull request #12263: [FLINK-16998][core] Support backwards compatibility for upgraded RowSerializer
flinkbot edited a comment on pull request #12263: URL: https://github.com/apache/flink/pull/12263#issuecomment-631274882 ## CI report: * 0e1d9cde275d0717fb9b32f6d1a3aed600c33166 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1933) * 320f0a551c635e98c4aff4af6d853d3cf2681fee UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #12266: [FLINK-17843][table-api] Check the RowKind when converting a Row from object to an expression
flinkbot edited a comment on pull request #12266: URL: https://github.com/apache/flink/pull/12266#issuecomment-631427812 ## CI report: * f33808f833da63c5563b48688053d49dedc46538 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1938) * d3268f7bfdf1dfa2c19dc2b38c80a4ee84a5f26c Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1943) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #11900: [FLINK-17284][jdbc][postgres] Support serial fields
flinkbot edited a comment on pull request #11900: URL: https://github.com/apache/flink/pull/11900#issuecomment-618914824 ## CI report: * 69bce2717b0279a894aa66d15cd4b9b72cd5a474 UNKNOWN * 17ee20d6efb84cca02a24b032c9504dcf03ff8a1 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1872) * 4cf97b2be4447c2d2f94259ad559fefb79a0a727 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1942) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] pnowojski merged pull request #12205: [FLINK-17780][checkpointing] Add task name to log statements of ChannelStateWriter.
pnowojski merged pull request #12205: URL: https://github.com/apache/flink/pull/12205 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #12267: [FLINK-17842][network] Fix performance regression in SpanningWrapper#clear
flinkbot commented on pull request #12267: URL: https://github.com/apache/flink/pull/12267#issuecomment-631471721 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 0afb379748084b4aef0fdf51c57e24044dfc31df (Wed May 20 13:24:59 UTC 2020) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] pnowojski opened a new pull request #12267: [FLINK-17842][network] Fix performance regression in SpanningWrapper#clear
pnowojski opened a new pull request #12267: URL: https://github.com/apache/flink/pull/12267 For some reason the following commit: 54155744bd [FLINK-17547][task] Use RefCountedFile in SpanningWrapper caused a performance regression in various benchmarks. It's hard to tell why, as none of the benchmarks use spill files (records are too small); our best guess is that the combination of the AtomicInteger inside RefCountedFile and the NullPointerException handling interfered with the JIT's ability to eliminate the memory barrier (from the AtomicInteger) on the hot path. ## Verifying this change This change is covered by existing micro benchmarks. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes / **no**) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**) - The serializers: (yes / **no** / don't know) - The runtime per-record code paths (performance sensitive): (**yes** / no / don't know) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / don't know) - The S3 file system connector: (yes / **no** / don't know) ## Documentation - Does this pull request introduce a new feature? (yes / **no**) - If yes, how is the feature documented? (**not applicable** / docs / JavaDocs / not documented) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
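The suspected mechanism can be sketched as follows. This is a minimal illustration of the pattern, assuming a reference count held in an AtomicInteger as in RefCountedFile; it is not Flink's actual class. Every retain/release is an atomic read-modify-write, and the associated memory barrier can prevent the JIT from optimizing the hot path even when the underlying resource is never actually used:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of a ref-counted resource (not Flink's RefCountedFile).
public class RefCountedResource {
    // Starts at 1: creating the resource counts as the first reference.
    private final AtomicInteger refCount = new AtomicInteger(1);

    public void retain() {
        // Atomic increment: a read-modify-write with full barrier semantics.
        refCount.incrementAndGet();
    }

    /** Returns true when the last reference was released. */
    public boolean release() {
        int remaining = refCount.decrementAndGet();
        if (remaining < 0) {
            throw new IllegalStateException("released more often than retained");
        }
        return remaining == 0;
    }

    public int getRefCount() {
        return refCount.get();
    }
}
```

If such retain/release calls sit on a per-record path, the atomics cost shows up even though the benchmarks never spill, which matches the guess in the PR description.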
[jira] [Updated] (FLINK-17842) Performance regression on 19.05.2020
[ https://issues.apache.org/jira/browse/FLINK-17842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-17842: --- Labels: pull-request-available (was: ) > Performance regression on 19.05.2020 > > > Key: FLINK-17842 > URL: https://issues.apache.org/jira/browse/FLINK-17842 > Project: Flink > Issue Type: Bug > Components: Benchmarks >Affects Versions: 1.11.0 >Reporter: Piotr Nowojski >Assignee: Piotr Nowojski >Priority: Blocker > Labels: pull-request-available > Fix For: 1.11.0 > > > There is a noticeable performance regression in many benchmarks: > http://codespeed.dak8s.net:8000/timeline/?ben=serializerHeavyString=2 > http://codespeed.dak8s.net:8000/timeline/?ben=networkThroughput.1000,1ms=2 > http://codespeed.dak8s.net:8000/timeline/?ben=networkThroughput.100,100ms=2 > http://codespeed.dak8s.net:8000/timeline/?ben=globalWindow=2 > that happened on May 19th, probably between 260ef2c and 2f18138 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on pull request #12266: [FLINK-17843][table-api] Check the RowKind when converting a Row from object to an expression
flinkbot edited a comment on pull request #12266: URL: https://github.com/apache/flink/pull/12266#issuecomment-631427812 ## CI report: * f33808f833da63c5563b48688053d49dedc46538 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1938) * d3268f7bfdf1dfa2c19dc2b38c80a4ee84a5f26c UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #12260: [FLINK-17189][table-planner] Table with proctime attribute cannot be read from Hive catalog
flinkbot edited a comment on pull request #12260: URL: https://github.com/apache/flink/pull/12260#issuecomment-631229314 ## CI report: * 87d0b478bf38fc74639f8ac2c065e4e6d2fc2156 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1927) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #12254: [FLINK-17802][kafka] Set offset commit only if group id is configured for new Kafka Table source
flinkbot edited a comment on pull request #12254: URL: https://github.com/apache/flink/pull/12254#issuecomment-630911224 ## CI report: * 6dd81680fa2182b19b2770f7338c3810aa1e4106 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1922) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…
flinkbot edited a comment on pull request #12230: URL: https://github.com/apache/flink/pull/12230#issuecomment-630205457 ## CI report: * 458ca449de6bb1007cd3e83f81fe09f973e7f6d3 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1926) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #11900: [FLINK-17284][jdbc][postgres] Support serial fields
flinkbot edited a comment on pull request #11900: URL: https://github.com/apache/flink/pull/11900#issuecomment-618914824 ## CI report: * 69bce2717b0279a894aa66d15cd4b9b72cd5a474 UNKNOWN * 17ee20d6efb84cca02a24b032c9504dcf03ff8a1 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1872) * 4cf97b2be4447c2d2f94259ad559fefb79a0a727 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-17775) Cannot set batch job name when using collect
[ https://issues.apache.org/jira/browse/FLINK-17775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112176#comment-17112176 ] Nikola commented on FLINK-17775: Hi [~aljoscha], that seems to be my bad. I can indeed remove the last {{env.execute()}} and my job will work just fine. I am using Flink in Docker, which we start through {{bin/taskmanager.sh}} and {{bin/jobmanager.sh}}. However, regarding the issue, it seems there is no way around it at the moment, as the code you point to does not take a job name into consideration. On the other hand, when I am using both .collect() and env.execute() you said my job will run twice. However, I cannot see my job running twice (or 2 jobs running). I can see only one. > Cannot set batch job name when using collect > > > Key: FLINK-17775 > URL: https://issues.apache.org/jira/browse/FLINK-17775 > Project: Flink > Issue Type: Bug > Components: Runtime / Configuration >Affects Versions: 1.8.3, 1.9.3, 1.10.1 >Reporter: Nikola >Priority: Critical > > We have a batch job along these lines: > > {code:java} > ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment(); > DataSet<MyRow> dataSet = getDataSet(); > dataSet > .sortPartition(MyRow::getCount, Order.DESCENDING) > .setParallelism(1) > .flatMap(new MyFlatMap()) > .collect(); > env.execute("Job at " + Instant.now().toString()); > {code} > However, the job name in the Flink UI is not "Job at " but the default > as if I didn't put anything. > > Is there a way to have my own Flink job name? > -- This message was sent by Atlassian Jira (v8.3.4#803005)
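The behaviour described above can be modelled with a toy sketch. This is not Flink's API, and real Flink versions may instead raise an error on the second execute call; it only illustrates why the custom name never appears: collect() is an eager action that submits the job itself under a default name, so the later execute(jobName) finds nothing new to run.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the collect()/execute() interaction (not Flink's API).
public class ToyEnv {
    public final List<String> submittedJobs = new ArrayList<>();
    private boolean hasPendingSinks = false;

    public void addSink() {
        hasPendingSinks = true;
    }

    public List<Integer> collect(List<Integer> data) {
        // Eager action: submits immediately under a default job name,
        // consuming the pending sinks.
        submittedJobs.add("default-job-name");
        hasPendingSinks = false;
        return data;
    }

    public void execute(String jobName) {
        if (!hasPendingSinks) {
            // Nothing left to run: the custom name never reaches a job.
            return;
        }
        submittedJobs.add(jobName);
        hasPendingSinks = false;
    }
}
```

Running addSink(), collect(), then execute("Job at ...") against this model submits exactly one job named "default-job-name", matching the observation of a single job with the default name.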
[GitHub] [flink] twalthr commented on pull request #12263: [FLINK-16998][core] Support backwards compatibility for upgraded RowSerializer
twalthr commented on pull request #12263: URL: https://github.com/apache/flink/pull/12263#issuecomment-631460366 Thanks @tzulitai. I addressed the remaining comments and will merge this once the build gives green light. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] fpompermaier commented on a change in pull request #11906: [FLINK-17356][jdbc][postgres] Support PK and Unique constraints
fpompermaier commented on a change in pull request #11906: URL: https://github.com/apache/flink/pull/11906#discussion_r427972573 ## File path: flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/catalog/AbstractJdbcCatalog.java ## @@ -126,31 +124,33 @@ public String getBaseUrl() { // -- retrieve PK constraint -- - protected UniqueConstraint getPrimaryKey(DatabaseMetaData metaData, String schema, String table) throws SQLException { + protected Optional<UniqueConstraint> getPrimaryKey(DatabaseMetaData metaData, String schema, String table) throws SQLException { // According to the Javadoc of java.sql.DatabaseMetaData#getPrimaryKeys, // the returned primary key columns are ordered by COLUMN_NAME, not by KEY_SEQ. // We need to sort them based on the KEY_SEQ value. ResultSet rs = metaData.getPrimaryKeys(null, schema, table); - List<Map.Entry<Integer, String>> columnsWithIndex = null; + Map<Integer, String> keySeqColumnName = new HashMap<>(); String pkName = null; - while (rs.next()) { + while (rs.next()) { String columnName = rs.getString("COLUMN_NAME"); - pkName = rs.getString("PK_NAME"); + pkName = rs.getString("PK_NAME"); // all the PK_NAME should be the same int keySeq = rs.getInt("KEY_SEQ"); - if (columnsWithIndex == null) { - columnsWithIndex = new ArrayList<>(); - } - columnsWithIndex.add(new AbstractMap.SimpleEntry<>(Integer.valueOf(keySeq), columnName)); + keySeqColumnName.put(keySeq - 1, columnName); // KEY_SEQ is 1-based index } - if (columnsWithIndex != null) { - // sort columns by KEY_SEQ - columnsWithIndex.sort(Comparator.comparingInt(Map.Entry::getKey)); - List<String> cols = columnsWithIndex.stream().map(Map.Entry::getValue).collect(Collectors.toList()); - return UniqueConstraint.primaryKey(pkName, cols); + List<String> pkFields = Arrays.asList(new String[keySeqColumnName.size()]); // initialize size + keySeqColumnName.forEach(pkFields::set); Review comment: Very neat and brilliant pattern... I learned something new :+1: This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
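The pattern praised in the review is worth spelling out in isolation. A self-contained sketch (illustrative names, not the catalog code itself):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class PrimaryKeyOrder {
    /**
     * Rebuilds the primary-key column list in KEY_SEQ order from a
     * (zero-based keySeq -> columnName) map, the pattern from the diff.
     * Arrays.asList over a pre-sized array yields a fixed-size but
     * index-mutable list, so forEach(pkFields::set) places every column
     * directly at its key-sequence position without any explicit sort.
     */
    public static List<String> orderedPkFields(Map<Integer, String> keySeqColumnName) {
        List<String> pkFields = Arrays.asList(new String[keySeqColumnName.size()]);
        keySeqColumnName.forEach(pkFields::set);
        return pkFields;
    }
}
```

The trick relies on Arrays.asList returning a view backed by the given array: add/remove throw UnsupportedOperationException, but set writes through, which is exactly what placing columns by index needs.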
[jira] [Closed] (FLINK-15947) Finish moving scala expression DSL to flink-table-api-scala
[ https://issues.apache.org/jira/browse/FLINK-15947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Wysakowicz closed FLINK-15947. Fix Version/s: (was: 1.12.0) 1.11.0 Release Note: Due to various issues with packages {{org.apache.flink.table.api.scala/java}}, all classes from those packages were relocated. Moreover, the Scala expressions were moved to org.apache.flink.table.api as announced in Flink 1.9. If you used one of * {{org.apache.flink.table.api.java.StreamTableEnvironment}} * {{org.apache.flink.table.api.scala.StreamTableEnvironment}} * {{org.apache.flink.table.api.java.BatchTableEnvironment}} * {{org.apache.flink.table.api.scala.BatchTableEnvironment}} and you do not convert to/from DataStream, switch to: * {{org.apache.flink.table.api.TableEnvironment}} If you do convert to/from DataStream/DataSet, change your imports to one of: * {{org.apache.flink.table.api.bridge.java.StreamTableEnvironment}} * {{org.apache.flink.table.api.bridge.scala.StreamTableEnvironment}} * {{org.apache.flink.table.api.bridge.java.BatchTableEnvironment}} * {{org.apache.flink.table.api.bridge.scala.BatchTableEnvironment}} For the Scala expressions use the import {{org.apache.flink.table.api._}} instead of {{org.apache.flink.table.api.scala._}}. Additionally, if you use Scala's implicit conversions to/from DataStream/DataSet, import {{org.apache.flink.table.api.bridge.scala._}} instead of {{org.apache.flink.table.api.scala._}}. Resolution: Fixed Implemented in master: 5f0183fe79d10ac36101f60f2589062a39630f96 4e56ca11fb275c72f4a70f8dd12ff71dc12983d3 1.11: 194b85b42749b03c5f1e79b5ae4377ab7230df36 87a0358deb51cf55f455d0dd4cfd6bf8690b2e2e > Finish moving scala expression DSL to flink-table-api-scala > --- > > Key: FLINK-15947 > URL: https://issues.apache.org/jira/browse/FLINK-15947 > Project: Flink > Issue Type: Improvement > Components: Table SQL / API >Reporter: Dawid Wysakowicz >Assignee: Dawid Wysakowicz >Priority: Major > Labels: pull-request-available > 
Fix For: 1.11.0 > > > FLINK-13045 performed the first step of moving implicit conversions to a long > term package object. It also added release notes so that users have time to > adapt to the changes. > Now that it's two releases since that time, we can finish moving all the > intended conversions. -- This message was sent by Atlassian Jira (v8.3.4#803005)
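In code, the release note above reduces to an import swap. A before/after sketch for the Java API (the Scala bridge package is analogous; this is an illustration of the note, not a compilable program):

```java
// Flink <= 1.10 (package removed in 1.11):
// import org.apache.flink.table.api.java.StreamTableEnvironment;

// Flink 1.11+, pure Table API program (no DataStream conversion):
import org.apache.flink.table.api.TableEnvironment;

// Flink 1.11+, only if you convert to/from DataStream:
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
```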
[GitHub] [flink] tzulitai commented on a change in pull request #12263: [FLINK-16998][core] Support backwards compatibility for upgraded RowSerializer
tzulitai commented on a change in pull request #12263: URL: https://github.com/apache/flink/pull/12263#discussion_r427970574 ## File path: flink-core/src/main/java/org/apache/flink/api/java/typeutils/runtime/RowSerializer.java ## @@ -367,21 +367,21 @@ public int getVersion() { /** * A {@link TypeSerializerSnapshot} for RowSerializer. */ - // TODO not fully functional yet due to FLINK-17520 public static final class RowSerializerSnapshot extends CompositeTypeSerializerSnapshot<Row, RowSerializer> { private static final int VERSION = 3; - private static final int VERSION_WITHOUT_ROW_KIND = 2; + private static final int LAST_VERSION_WITHOUT_ROW_KIND = 2; - private boolean legacyModeEnabled = false; + private int readVersion = VERSION; public RowSerializerSnapshot() { super(RowSerializer.class); } RowSerializerSnapshot(RowSerializer serializerInstance) { super(serializerInstance); + this.readVersion = serializerInstance.legacyModeEnabled ? LAST_VERSION_WITHOUT_ROW_KIND : VERSION; Review comment: I don't think this line is needed, unless I'm missing something in the tests. The read version should only ever be changed if this snapshot was created by restoring from a snapshot. In this case, this constructor is only ever used to create a new snapshot when checkpointing occurs - the read version should be the default value (`VERSION`). ## File path: flink-core/src/main/java/org/apache/flink/api/java/typeutils/runtime/RowSerializer.java ## @@ -60,7 +60,7 @@ public static final int ROW_KIND_OFFSET = 2; - private static final long serialVersionUID = 2L; + private static final long serialVersionUID = 1L; // legacy, don't touch Review comment: nit: maybe add a comment that this can only be touched after support for 1.9 savepoints is ditched. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
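Stripped of Flink specifics, the review point above is an invariant: `readVersion` starts at the current `VERSION` and may only diverge when restoring an older snapshot, never in the checkpoint-time constructor. A minimal, Flink-free sketch of that invariant (class and method names are simplified stand-ins, not Flink's API):

```java
public class VersionedSnapshotSketch {
    static final int VERSION = 3;
    static final int LAST_VERSION_WITHOUT_ROW_KIND = 2;

    // Fresh snapshots (taken at checkpoint time) always use the current version.
    private int readVersion = VERSION;

    /** Restore path: the only place readVersion may legitimately change. */
    void readSnapshot(int writtenVersion) {
        this.readVersion = writtenVersion;
    }

    /** Versions after 2 serialize the extra RowKind byte. */
    boolean supportsRowKind() {
        return readVersion > LAST_VERSION_WITHOUT_ROW_KIND;
    }

    public static void main(String[] args) {
        VersionedSnapshotSketch fresh = new VersionedSnapshotSketch();
        System.out.println(fresh.supportsRowKind()); // true: new checkpoints carry RowKind

        VersionedSnapshotSketch restored = new VersionedSnapshotSketch();
        restored.readSnapshot(2); // restoring a pre-RowKind savepoint
        System.out.println(restored.supportsRowKind()); // false: read the legacy layout
    }
}
```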
[jira] [Commented] (FLINK-17828) AggregateReduceGroupingITCase fails on azure
[ https://issues.apache.org/jira/browse/FLINK-17828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112135#comment-17112135 ] Jark Wu commented on FLINK-17828: - Hi [~chesnay], there are 5 lines... I guess the mismatch is hidden in the 5 lines. > AggregateReduceGroupingITCase fails on azure > > > Key: FLINK-17828 > URL: https://issues.apache.org/jira/browse/FLINK-17828 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.12.0 >Reporter: Dawid Wysakowicz >Priority: Blocker > Labels: test-stability > > failure: > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1906&view=logs&j=e25d5e7e-2a9c-5589-4940-0b638d75a414&t=f83cd372-208c-5ec4-12a8-337462457129 > {code} > 2020-05-20T05:45:19.9368056Z [ERROR] Tests run: 16, Failures: 1, Errors: 0, > Skipped: 0, Time elapsed: 70.635 s <<< FAILURE! - in > org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase > 2020-05-20T05:45:19.9400043Z [ERROR] > testSingleAggOnTable_SortAgg(org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase) > Time elapsed: 17.264 s <<< FAILURE! 
> 2020-05-20T05:45:19.9401582Z java.lang.AssertionError: > 2020-05-20T05:45:19.9402509Z > 2020-05-20T05:45:19.9402933Z Results do not match for query: > 2020-05-20T05:45:19.9403278Z SELECT a6, b6, max(c6), count(d6), sum(e6) > FROM T6 GROUP BY a6, b6 > 2020-05-20T05:45:19.9407322Z > 2020-05-20T05:45:19.9407851Z Results > 2020-05-20T05:45:19.9408713Z == Correct Result - 5 == == Actual Result > - 5 == > 2020-05-20T05:45:19.9409059Z 0,1,null,1,10 0,1,null,1,10 > 2020-05-20T05:45:19.9409717Z 1,1,Hello1,1,101,1,Hello1,1,10 > 2020-05-20T05:45:19.9410018Z 10,1,Hello10,1,10 10,1,Hello10,1,10 > 2020-05-20T05:45:19.9410296Z 100,1,Hello100,1,10 > 100,1,Hello100,1,10 > 2020-05-20T05:45:19.9410596Z 1000,1,null,1,10 1000,1,null,1,10 > 2020-05-20T05:45:19.9410868Z 1,1,null,1,10 1,1,null,1,10 > 2020-05-20T05:45:19.9411184Z 10001,1,Hello10001,1,10 > 10001,1,Hello10001,1,10 > 2020-05-20T05:45:19.9411479Z 10002,1,Hello10002,1,10 > 10002,1,Hello10002,1,10 > 2020-05-20T05:45:19.9411786Z 10003,1,Hello10003,1,10 > 10003,1,Hello10003,1,10 > 2020-05-20T05:45:19.9412092Z 10004,1,Hello10004,1,10 > 10004,1,Hello10004,1,10 > 2020-05-20T05:45:19.9412379Z 10005,1,Hello10005,1,10 > 10005,1,Hello10005,1,10 > 2020-05-20T05:45:19.9412941Z 10006,1,Hello10006,1,10 > 10006,1,Hello10006,1,10 > 2020-05-20T05:45:19.9413241Z 10007,1,Hello10007,1,10 > 10007,1,Hello10007,1,10 > 2020-05-20T05:45:19.9413555Z 10008,1,Hello10008,1,10 > 10008,1,Hello10008,1,10 > 2020-05-20T05:45:19.9413977Z 10009,1,Hello10009,1,10 > 10009,1,Hello10009,1,10 > 2020-05-20T05:45:19.9414377Z 1001,1,Hello1001,1,10 > 1001,1,Hello1001,1,10 > 2020-05-20T05:45:19.9414686Z 10010,1,Hello10010,1,10 > 10010,1,Hello10010,1,10 > 2020-05-20T05:45:19.9415462Z 10011,1,Hello10011,1,10 > 10011,1,Hello10011,1,10 > 2020-05-20T05:45:19.9415783Z 10012,1,Hello10012,1,10 > 10012,1,Hello10012,1,10 > 2020-05-20T05:45:19.9416081Z 10013,1,Hello10013,1,10 > 10013,1,Hello10013,1,10 > 2020-05-20T05:45:19.9416926Z 10014,1,Hello10014,1,10 > 10014,1,Hello10014,1,10 
> 2020-05-20T05:45:19.9417349Z 10015,1,Hello10015,1,10 > 10015,1,Hello10015,1,10 > 2020-05-20T05:45:19.9417664Z 10016,1,Hello10016,1,10 > 10016,1,Hello10016,1,10 > 2020-05-20T05:45:19.9418011Z 10017,1,Hello10017,1,10 > 10017,1,Hello10017,1,10 > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] fpompermaier commented on pull request #11900: [FLINK-17284][jdbc][postgres] Support serial fields
fpompermaier commented on pull request #11900: URL: https://github.com/apache/flink/pull/11900#issuecomment-631442974 Updated PR to resolve conflicts This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-17828) AggregateReduceGroupingITCase fails on azure
[ https://issues.apache.org/jira/browse/FLINK-17828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112129#comment-17112129 ] Chesnay Schepler commented on FLINK-17828: -- Am I blind or is there no visible difference between the two? This isn't just some encoding/newline/whitespace issue again, is it? > AggregateReduceGroupingITCase fails on azure > > > Key: FLINK-17828 > URL: https://issues.apache.org/jira/browse/FLINK-17828 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.12.0 >Reporter: Dawid Wysakowicz >Priority: Blocker > Labels: test-stability > > failure: > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1906&view=logs&j=e25d5e7e-2a9c-5589-4940-0b638d75a414&t=f83cd372-208c-5ec4-12a8-337462457129 > {code} > 2020-05-20T05:45:19.9368056Z [ERROR] Tests run: 16, Failures: 1, Errors: 0, > Skipped: 0, Time elapsed: 70.635 s <<< FAILURE! - in > org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase > 2020-05-20T05:45:19.9400043Z [ERROR] > testSingleAggOnTable_SortAgg(org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase) > Time elapsed: 17.264 s <<< FAILURE! 
> 2020-05-20T05:45:19.9401582Z java.lang.AssertionError: > 2020-05-20T05:45:19.9402509Z > 2020-05-20T05:45:19.9402933Z Results do not match for query: > 2020-05-20T05:45:19.9403278Z SELECT a6, b6, max(c6), count(d6), sum(e6) > FROM T6 GROUP BY a6, b6 > 2020-05-20T05:45:19.9407322Z > 2020-05-20T05:45:19.9407851Z Results > 2020-05-20T05:45:19.9408713Z == Correct Result - 5 == == Actual Result > - 5 == > 2020-05-20T05:45:19.9409059Z 0,1,null,1,10 0,1,null,1,10 > 2020-05-20T05:45:19.9409717Z 1,1,Hello1,1,101,1,Hello1,1,10 > 2020-05-20T05:45:19.9410018Z 10,1,Hello10,1,10 10,1,Hello10,1,10 > 2020-05-20T05:45:19.9410296Z 100,1,Hello100,1,10 > 100,1,Hello100,1,10 > 2020-05-20T05:45:19.9410596Z 1000,1,null,1,10 1000,1,null,1,10 > 2020-05-20T05:45:19.9410868Z 1,1,null,1,10 1,1,null,1,10 > 2020-05-20T05:45:19.9411184Z 10001,1,Hello10001,1,10 > 10001,1,Hello10001,1,10 > 2020-05-20T05:45:19.9411479Z 10002,1,Hello10002,1,10 > 10002,1,Hello10002,1,10 > 2020-05-20T05:45:19.9411786Z 10003,1,Hello10003,1,10 > 10003,1,Hello10003,1,10 > 2020-05-20T05:45:19.9412092Z 10004,1,Hello10004,1,10 > 10004,1,Hello10004,1,10 > 2020-05-20T05:45:19.9412379Z 10005,1,Hello10005,1,10 > 10005,1,Hello10005,1,10 > 2020-05-20T05:45:19.9412941Z 10006,1,Hello10006,1,10 > 10006,1,Hello10006,1,10 > 2020-05-20T05:45:19.9413241Z 10007,1,Hello10007,1,10 > 10007,1,Hello10007,1,10 > 2020-05-20T05:45:19.9413555Z 10008,1,Hello10008,1,10 > 10008,1,Hello10008,1,10 > 2020-05-20T05:45:19.9413977Z 10009,1,Hello10009,1,10 > 10009,1,Hello10009,1,10 > 2020-05-20T05:45:19.9414377Z 1001,1,Hello1001,1,10 > 1001,1,Hello1001,1,10 > 2020-05-20T05:45:19.9414686Z 10010,1,Hello10010,1,10 > 10010,1,Hello10010,1,10 > 2020-05-20T05:45:19.9415462Z 10011,1,Hello10011,1,10 > 10011,1,Hello10011,1,10 > 2020-05-20T05:45:19.9415783Z 10012,1,Hello10012,1,10 > 10012,1,Hello10012,1,10 > 2020-05-20T05:45:19.9416081Z 10013,1,Hello10013,1,10 > 10013,1,Hello10013,1,10 > 2020-05-20T05:45:19.9416926Z 10014,1,Hello10014,1,10 > 10014,1,Hello10014,1,10 
> 2020-05-20T05:45:19.9417349Z 10015,1,Hello10015,1,10 > 10015,1,Hello10015,1,10 > 2020-05-20T05:45:19.9417664Z 10016,1,Hello10016,1,10 > 10016,1,Hello10016,1,10 > 2020-05-20T05:45:19.9418011Z 10017,1,Hello10017,1,10 > 10017,1,Hello10017,1,10 > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
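When two log blocks look byte-for-byte identical, as in the question above, the difference is often an invisible character. A generic debugging sketch (not part of Flink — just one way to surface the first differing code point between two lines):

```java
public class FirstDiff {

    /** Index of the first differing char, or -1 if the strings are equal. */
    static int firstDiff(String a, String b) {
        int n = Math.min(a.length(), b.length());
        for (int i = 0; i < n; i++) {
            if (a.charAt(i) != b.charAt(i)) {
                return i;
            }
        }
        // Equal prefix: differ only if one string is longer.
        return a.length() == b.length() ? -1 : n;
    }

    public static void main(String[] args) {
        String expected = "1,1,null,1,10";
        String actual = "1,1,nu\u200Bll,1,10"; // zero-width space hides in a terminal dump
        int i = firstDiff(expected, actual);
        // Printing the code points makes tabs, NBSPs and ZWSPs visible.
        System.out.printf("first diff at index %d: U+%04X vs U+%04X%n",
                i, (int) expected.charAt(i), (int) actual.charAt(i));
        // first diff at index 6: U+006C vs U+200B
    }
}
```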
[GitHub] [flink] dawidwys closed pull request #12232: [FLINK-15947] Finish moving scala expression DSL to flink-table-api-scala
dawidwys closed pull request #12232: URL: https://github.com/apache/flink/pull/12232 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #12263: [FLINK-16998][core] Support backwards compatibility for upgraded RowSerializer
flinkbot edited a comment on pull request #12263: URL: https://github.com/apache/flink/pull/12263#issuecomment-631274882 ## CI report: * 0e1d9cde275d0717fb9b32f6d1a3aed600c33166 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1933) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #12266: [FLINK-17843][table-api] Check the RowKind when converting a Row from object to an expression
flinkbot edited a comment on pull request #12266: URL: https://github.com/apache/flink/pull/12266#issuecomment-631427812 ## CI report: * f33808f833da63c5563b48688053d49dedc46538 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1938) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #12232: [FLINK-15947] Finish moving scala expression DSL to flink-table-api-scala
flinkbot edited a comment on pull request #12232: URL: https://github.com/apache/flink/pull/12232#issuecomment-630355116 ## CI report: * bde94ff2e28c3b8d1b9e2b25c38afa24f8a558fd UNKNOWN * cffb27bb10c6d5da974483fbe8a32e562a0484e8 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1936) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #12264: [FLINK-17558][netty] Release partitions asynchronously
flinkbot edited a comment on pull request #12264: URL: https://github.com/apache/flink/pull/12264#issuecomment-631349883 ## CI report: * 19c5f57b94cc56b70002031618c32d9e6f68effb UNKNOWN * 9dbaf3094c0942b96a01060aba9d4ffbad9d1857 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1934) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #12096: [FLINK-16074][docs-zh] Translate the Overview page for State & Fault Tolerance into Chinese
flinkbot edited a comment on pull request #12096: URL: https://github.com/apache/flink/pull/12096#issuecomment-627237312 ## CI report: * d5ca90e68a87b35c5969ef79b099164d850381ff Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1918) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-17828) AggregateReduceGroupingITCase fails on azure
[ https://issues.apache.org/jira/browse/FLINK-17828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112123#comment-17112123 ] Jark Wu commented on FLINK-17828: - cc [~godfreyhe]. I can't reproduce this error locally. > AggregateReduceGroupingITCase fails on azure > > > Key: FLINK-17828 > URL: https://issues.apache.org/jira/browse/FLINK-17828 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.12.0 >Reporter: Dawid Wysakowicz >Priority: Blocker > Labels: test-stability > > failure: > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1906&view=logs&j=e25d5e7e-2a9c-5589-4940-0b638d75a414&t=f83cd372-208c-5ec4-12a8-337462457129 > {code} > 2020-05-20T05:45:19.9368056Z [ERROR] Tests run: 16, Failures: 1, Errors: 0, > Skipped: 0, Time elapsed: 70.635 s <<< FAILURE! - in > org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase > 2020-05-20T05:45:19.9400043Z [ERROR] > testSingleAggOnTable_SortAgg(org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase) > Time elapsed: 17.264 s <<< FAILURE! 
> 2020-05-20T05:45:19.9401582Z java.lang.AssertionError: > 2020-05-20T05:45:19.9402509Z > 2020-05-20T05:45:19.9402933Z Results do not match for query: > 2020-05-20T05:45:19.9403278Z SELECT a6, b6, max(c6), count(d6), sum(e6) > FROM T6 GROUP BY a6, b6 > 2020-05-20T05:45:19.9407322Z > 2020-05-20T05:45:19.9407851Z Results > 2020-05-20T05:45:19.9408713Z == Correct Result - 5 == == Actual Result > - 5 == > 2020-05-20T05:45:19.9409059Z 0,1,null,1,10 0,1,null,1,10 > 2020-05-20T05:45:19.9409717Z 1,1,Hello1,1,101,1,Hello1,1,10 > 2020-05-20T05:45:19.9410018Z 10,1,Hello10,1,10 10,1,Hello10,1,10 > 2020-05-20T05:45:19.9410296Z 100,1,Hello100,1,10 > 100,1,Hello100,1,10 > 2020-05-20T05:45:19.9410596Z 1000,1,null,1,10 1000,1,null,1,10 > 2020-05-20T05:45:19.9410868Z 1,1,null,1,10 1,1,null,1,10 > 2020-05-20T05:45:19.9411184Z 10001,1,Hello10001,1,10 > 10001,1,Hello10001,1,10 > 2020-05-20T05:45:19.9411479Z 10002,1,Hello10002,1,10 > 10002,1,Hello10002,1,10 > 2020-05-20T05:45:19.9411786Z 10003,1,Hello10003,1,10 > 10003,1,Hello10003,1,10 > 2020-05-20T05:45:19.9412092Z 10004,1,Hello10004,1,10 > 10004,1,Hello10004,1,10 > 2020-05-20T05:45:19.9412379Z 10005,1,Hello10005,1,10 > 10005,1,Hello10005,1,10 > 2020-05-20T05:45:19.9412941Z 10006,1,Hello10006,1,10 > 10006,1,Hello10006,1,10 > 2020-05-20T05:45:19.9413241Z 10007,1,Hello10007,1,10 > 10007,1,Hello10007,1,10 > 2020-05-20T05:45:19.9413555Z 10008,1,Hello10008,1,10 > 10008,1,Hello10008,1,10 > 2020-05-20T05:45:19.9413977Z 10009,1,Hello10009,1,10 > 10009,1,Hello10009,1,10 > 2020-05-20T05:45:19.9414377Z 1001,1,Hello1001,1,10 > 1001,1,Hello1001,1,10 > 2020-05-20T05:45:19.9414686Z 10010,1,Hello10010,1,10 > 10010,1,Hello10010,1,10 > 2020-05-20T05:45:19.9415462Z 10011,1,Hello10011,1,10 > 10011,1,Hello10011,1,10 > 2020-05-20T05:45:19.9415783Z 10012,1,Hello10012,1,10 > 10012,1,Hello10012,1,10 > 2020-05-20T05:45:19.9416081Z 10013,1,Hello10013,1,10 > 10013,1,Hello10013,1,10 > 2020-05-20T05:45:19.9416926Z 10014,1,Hello10014,1,10 > 10014,1,Hello10014,1,10 
> 2020-05-20T05:45:19.9417349Z 10015,1,Hello10015,1,10 > 10015,1,Hello10015,1,10 > 2020-05-20T05:45:19.9417664Z 10016,1,Hello10016,1,10 > 10016,1,Hello10016,1,10 > 2020-05-20T05:45:19.9418011Z 10017,1,Hello10017,1,10 > 10017,1,Hello10017,1,10 > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot commented on pull request #12266: [FLINK-17843][table-api] Check the RowKind when converting a Row from object to an expression
flinkbot commented on pull request #12266: URL: https://github.com/apache/flink/pull/12266#issuecomment-631427812 ## CI report: * f33808f833da63c5563b48688053d49dedc46538 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #11906: [FLINK-17356][jdbc][postgres] Support PK and Unique constraints
flinkbot edited a comment on pull request #11906: URL: https://github.com/apache/flink/pull/11906#issuecomment-619214462 ## CI report: * 2e339ca93fcf4461ddb3502b49ab34083fc96cf6 UNKNOWN * 66afd5253c17fae0a41bc38f41338a69268ca4ff Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1917) * 1310d3ed1bad9e2356a320128cac125e930831dc Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1932) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Closed] (FLINK-17622) Remove useless switch for decimal in PostgresCatalog
[ https://issues.apache.org/jira/browse/FLINK-17622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu closed FLINK-17622. --- Fix Version/s: 1.11.0 Resolution: Fixed master (1.12.0): d7fc0d0620eae583ac71352e884e38affcc9f9e9 1.11.0: 6097d97a39877758d2729242186a19d86220e6ea > Remove useless switch for decimal in PostgresCatalog > --- > > Key: FLINK-17622 > URL: https://issues.apache.org/jira/browse/FLINK-17622 > Project: Flink > Issue Type: Sub-task > Components: Connectors / JDBC >Reporter: Flavio Pompermaier >Assignee: Flavio Pompermaier >Priority: Major > Labels: pull-request-available > Fix For: 1.11.0 > > > Remove the useless switch for decimal fields. The Postgres JDBC connector > translates them to numeric -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] wuchong commented on pull request #12090: [FLINK-17622][connectors / jdbc] Remove useless switch for decimal in PostgresCatalog
wuchong commented on pull request #12090: URL: https://github.com/apache/flink/pull/12090#issuecomment-631426574 Passed. Merging... This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] wuchong merged pull request #12090: [FLINK-17622][connectors / jdbc] Remove useless switch for decimal in PostgresCatalog
wuchong merged pull request #12090: URL: https://github.com/apache/flink/pull/12090 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #12266: [FLINK-17843][table-api] Check the RowKind when converting a Row from object to an expression
flinkbot commented on pull request #12266: URL: https://github.com/apache/flink/pull/12266#issuecomment-631422615 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit f33808f833da63c5563b48688053d49dedc46538 (Wed May 20 11:43:29 UTC 2020) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-statefun] tzulitai opened a new pull request #115: [FLINK-17518] [e2e] Add remote module E2E
tzulitai opened a new pull request #115: URL: https://github.com/apache/flink-statefun/pull/115 This PR adds an E2E test that consists of a complete YAML-based remote module, with: - YAML auto-routable Kafka ingress - YAML generic Kafka egress - Remote functions using the Python SDK Since the coverage completely subsumes the routable Kafka E2E, the routable Kafka E2E test is removed in favor of this new E2E. Please see the class-level docs of `RemoteModuleE2E` for details of the test scenario. ## Brief change log - a8b4b7b Preliminary change to the Python SDK build script, to allow it to be run with Maven - 470dc52 Make the `-Prun-e2e-tests` build profile also build the Python SDK. This is required because the remote module E2E test requires the Python SDK wheels built. - ce76abd The test scenario of the new `RemoteModuleE2E` will result in output records to Kafka being written with indeterministic order. This commit adds a matcher to the `KafkaIOVerifier` that matches the consumed outputs with any order. - 31b616d The Python remote functions - 82e93b7 The actual E2E test implementation - 95a1366 Removes the routable Kafka E2E ## Verifying Travis should pass, or locally run `mvn clean install -Prun-e2e-tests` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-17518) Add HTTP-based request reply protocol E2E test for Stateful Functions
[ https://issues.apache.org/jira/browse/FLINK-17518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-17518: --- Labels: pull-request-available (was: ) > Add HTTP-based request reply protocol E2E test for Stateful Functions > - > > Key: FLINK-17518 > URL: https://issues.apache.org/jira/browse/FLINK-17518 > Project: Flink > Issue Type: Sub-task > Components: Stateful Functions >Reporter: Tzu-Li (Gordon) Tai >Assignee: Tzu-Li (Gordon) Tai >Priority: Blocker > Labels: pull-request-available > Fix For: statefun-2.0.1, statefun-2.1.0 > > > The E2E test should consist of a standalone deployed containerized remote > function, e.g. using the Python SDK + Flask, as well as a Flink Stateful > Functions cluster deployed using the {{StatefulFunctionsAppsContainers}} > utility. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17843) Check for RowKind when converting Row to expression
[ https://issues.apache.org/jira/browse/FLINK-17843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-17843: --- Labels: pull-request-available (was: ) > Check for RowKind when converting Row to expression > --- > > Key: FLINK-17843 > URL: https://issues.apache.org/jira/browse/FLINK-17843 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / API >Affects Versions: 1.11.0 >Reporter: Dawid Wysakowicz >Assignee: Dawid Wysakowicz >Priority: Major > Labels: pull-request-available > Fix For: 1.11.0 > > > A row ctor does not allow for a rowKind, thus we should check if the rowKind > is set when converting from {{Row}} to expression. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] dawidwys opened a new pull request #12266: [FLINK-17843][table-api] Check the RowKind when converting a Row from object to an expression
dawidwys opened a new pull request #12266: URL: https://github.com/apache/flink/pull/12266 ## What is the purpose of the change Row constructor expression does not support a RowKind flag. It is possible to create only constant expressions of an INSERT row. This PR adds a check when converting a Row to an expression for the `RowKind`. ## Verifying this change Added tests in `org.apache.flink.table.expressions.ObjectToExpressionTest` ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes / **no**) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**) - The serializers: (yes / **no** / don't know) - The runtime per-record code paths (performance sensitive): (yes / **no** / don't know) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / don't know) - The S3 file system connector: (yes / **no** / don't know) ## Documentation - Does this pull request introduce a new feature? (yes / **no**) - If yes, how is the feature documented? (**not applicable** / docs / JavaDocs / not documented) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Assigned] (FLINK-17518) Add HTTP-based request reply protocol E2E test for Stateful Functions
[ https://issues.apache.org/jira/browse/FLINK-17518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tzu-Li (Gordon) Tai reassigned FLINK-17518: --- Assignee: Tzu-Li (Gordon) Tai > Add HTTP-based request reply protocol E2E test for Stateful Functions > - > > Key: FLINK-17518 > URL: https://issues.apache.org/jira/browse/FLINK-17518 > Project: Flink > Issue Type: Sub-task > Components: Stateful Functions >Reporter: Tzu-Li (Gordon) Tai >Assignee: Tzu-Li (Gordon) Tai >Priority: Blocker > Fix For: statefun-2.0.1, statefun-2.1.0 > > > The E2E test should consist of a standalone deployed containerized remote > function, e.g. using the Python SDK + Flask, as well as a Flink Stateful > Functions cluster deployed using the {{StatefulFunctionsAppsContainers}} > utility. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on pull request #12232: [FLINK-15947] Finish moving scala expression DSL to flink-table-api-scala
flinkbot edited a comment on pull request #12232: URL: https://github.com/apache/flink/pull/12232#issuecomment-630355116 ## CI report: * efc125913ce29720089ebc8ef13131da3c2fab8a Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1931) * bde94ff2e28c3b8d1b9e2b25c38afa24f8a558fd UNKNOWN * cffb27bb10c6d5da974483fbe8a32e562a0484e8 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1936) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[jira] [Created] (FLINK-17843) Check for RowKind when converting Row to expression
Dawid Wysakowicz created FLINK-17843: Summary: Check for RowKind when converting Row to expression Key: FLINK-17843 URL: https://issues.apache.org/jira/browse/FLINK-17843 Project: Flink Issue Type: Sub-task Components: Table SQL / API Affects Versions: 1.11.0 Reporter: Dawid Wysakowicz Assignee: Dawid Wysakowicz Fix For: 1.11.0 A row constructor does not allow for a RowKind, thus we should check whether the RowKind is set when converting from {{Row}} to an expression.
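For illustration, a minimal self-contained sketch of the check described in the issue above. The types and names here are hypothetical stand-ins (not Flink's actual `org.apache.flink.types.Row` API): a row constructor expression can only represent an INSERT row, so any other row kind must be rejected during conversion.

```java
import java.util.StringJoiner;

public class RowKindCheckSketch {
    // Hypothetical stand-in for org.apache.flink.types.RowKind
    enum RowKind { INSERT, UPDATE_BEFORE, UPDATE_AFTER, DELETE }

    /** Converts a row to a textual row-constructor expression, rejecting non-INSERT kinds. */
    static String convertToExpression(RowKind kind, Object... fields) {
        if (kind != RowKind.INSERT) {
            throw new IllegalArgumentException(
                "Cannot convert a Row of kind " + kind
                    + " to an expression; a row constructor only represents INSERT rows.");
        }
        StringJoiner joiner = new StringJoiner(", ", "row(", ")");
        for (Object field : fields) {
            joiner.add(String.valueOf(field));
        }
        return joiner.toString();
    }

    public static void main(String[] args) {
        System.out.println(convertToExpression(RowKind.INSERT, 1, "a")); // prints row(1, a)
        try {
            convertToExpression(RowKind.DELETE, 1);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The real fix lives in the conversion path exercised by `ObjectToExpressionTest`; this sketch only shows the shape of the guard.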
[GitHub] [flink] flinkbot edited a comment on pull request #12263: [FLINK-16998][core] Support backwards compatibility for upgraded RowSerializer
flinkbot edited a comment on pull request #12263: URL: https://github.com/apache/flink/pull/12263#issuecomment-631274882 ## CI report: * 5e0f9df0a404a5d88b8762238ec37b903b9f0e4b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1910) * 0e1d9cde275d0717fb9b32f6d1a3aed600c33166 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1933)
[GitHub] [flink] flinkbot edited a comment on pull request #12232: [FLINK-15947] Finish moving scala expression DSL to flink-table-api-scala
flinkbot edited a comment on pull request #12232: URL: https://github.com/apache/flink/pull/12232#issuecomment-630355116 ## CI report: * efc125913ce29720089ebc8ef13131da3c2fab8a Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1931) * bde94ff2e28c3b8d1b9e2b25c38afa24f8a558fd UNKNOWN * cffb27bb10c6d5da974483fbe8a32e562a0484e8 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #12264: [FLINK-17558][netty] Release partitions asynchronously
flinkbot edited a comment on pull request #12264: URL: https://github.com/apache/flink/pull/12264#issuecomment-631349883 ## CI report: * 19c5f57b94cc56b70002031618c32d9e6f68effb UNKNOWN * 9dbaf3094c0942b96a01060aba9d4ffbad9d1857 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1934)
[jira] [Commented] (FLINK-17842) Performance regression on 19.05.2020
[ https://issues.apache.org/jira/browse/FLINK-17842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112056#comment-17112056 ] Piotr Nowojski commented on FLINK-17842: These are the commits in the suspected range:
{noformat}
2f18138df2 [26 hours ago] [FLINK-17809][dist] Quote classpath and FLINK_CONF_DIR [Chesnay Schepler]
31ec497b96 [2 days ago] [FLINK-17763][dist] Properly handle log properties and spaces in scala-shell.sh [Chesnay Schepler]
2aacb62c29 [13 days ago] [FLINK-17547][task] Implement getUnconsumedSegment for spilled buffers [Roman Khachatryan]
54155744bd [13 days ago] [FLINK-17547][task] Use RefCountedFile in SpanningWrapper (todo: merge with next?) [Roman Khachatryan]
2fcc1fca7c [13 days ago] [FLINK-17547][task][hotfix] Move RefCountedFile to flink-core to use it in SpanningWrapper [Roman Khachatryan]
179de29f09 [13 days ago] [FLINK-17547][task][hotfix] Extract RefCountedFileWithStream from RefCountedFile. Motivation: use RefCountedFile for reading as well. [Roman Khachatryan]
37f441a2fc [2 days ago] [FLINK-17547][task] Use iterator for unconsumed buffers. Motivation: support spilled records. Changes: 1. change SpillingAdaptiveSpanningRecordDeserializer.getUnconsumedBuffer signature 2. adapt channel state persistence to new types [Roman Khachatryan]
824100e146 [8 days ago] [FLINK-17547][task][hotfix] Extract methods from RecordsDeserializer [Roman Khachatryan]
67d3eae6f1 [8 days ago] [FLINK-17547][task][hotfix] Fix compiler warnings in NonSpanningWrapper [Roman Khachatryan]
d7b29f7bb5 [2 weeks ago] [FLINK-17547][task][hotfix] Extract SpanningWrapper from SpillingAdaptiveSpanningRecordDeserializer (static inner class). As it is, no logical changes. [Roman Khachatryan]
6e3c5abf7b [2 weeks ago] [FLINK-17547][task][hotfix] Extract NonSpanningWrapper from SpillingAdaptiveSpanningRecordDeserializer (static inner class). As it is, no logical changes. [Roman Khachatryan]
8548d37df6 [13 days ago] [FLINK-17547][task][hotfix] Improve error handling: 1. catch one more invalid input in DataOutputSerializer.write 2. more informative error messages [Roman Khachatryan]
{noformat}
This means the regression was probably caused by FLINK-17547. CC [~roman_khachatryan]
> Performance regression on 19.05.2020 > > > Key: FLINK-17842 > URL: https://issues.apache.org/jira/browse/FLINK-17842 > Project: Flink > Issue Type: Bug > Components: Benchmarks >Affects Versions: 1.11.0 >Reporter: Piotr Nowojski >Assignee: Piotr Nowojski >Priority: Blocker > Fix For: 1.11.0 > > > There is a noticeable performance regression in many benchmarks: > http://codespeed.dak8s.net:8000/timeline/?ben=serializerHeavyString=2 > http://codespeed.dak8s.net:8000/timeline/?ben=networkThroughput.1000,1ms=2 > http://codespeed.dak8s.net:8000/timeline/?ben=networkThroughput.100,100ms=2 > http://codespeed.dak8s.net:8000/timeline/?ben=globalWindow=2 > that happened on May 19th, probably between 260ef2c and 2f18138
[jira] [Updated] (FLINK-17842) Performance regression on 19.05.2020
[ https://issues.apache.org/jira/browse/FLINK-17842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Piotr Nowojski updated FLINK-17842: --- Description: There is a noticeable performance regression in many benchmarks: http://codespeed.dak8s.net:8000/timeline/?ben=serializerHeavyString=2 http://codespeed.dak8s.net:8000/timeline/?ben=networkThroughput.1000,1ms=2 http://codespeed.dak8s.net:8000/timeline/?ben=networkThroughput.100,100ms=2 http://codespeed.dak8s.net:8000/timeline/?ben=globalWindow=2 that happened on May 19th, probably between 260ef2c and 2f18138 was: There is a noticeable performance regression in many benchmarks: http://codespeed.dak8s.net:8000/timeline/?ben=serializerHeavyString=2 http://codespeed.dak8s.net:8000/timeline/?ben=networkThroughput.1000,1ms=2 http://codespeed.dak8s.net:8000/timeline/?ben=networkThroughput.100,100ms=2 http://codespeed.dak8s.net:8000/timeline/?ben=globalWindow=2 that happened on May 19th. > Performance regression on 19.05.2020 > > > Key: FLINK-17842 > URL: https://issues.apache.org/jira/browse/FLINK-17842 > Project: Flink > Issue Type: Bug > Components: Benchmarks >Affects Versions: 1.11.0 >Reporter: Piotr Nowojski >Priority: Blocker > Fix For: 1.11.0 > > > There is a noticeable performance regression in many benchmarks: > http://codespeed.dak8s.net:8000/timeline/?ben=serializerHeavyString=2 > http://codespeed.dak8s.net:8000/timeline/?ben=networkThroughput.1000,1ms=2 > http://codespeed.dak8s.net:8000/timeline/?ben=networkThroughput.100,100ms=2 > http://codespeed.dak8s.net:8000/timeline/?ben=globalWindow=2 > that happened on May 19th, probably between 260ef2c and 2f18138
[jira] [Assigned] (FLINK-17842) Performance regression on 19.05.2020
[ https://issues.apache.org/jira/browse/FLINK-17842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Piotr Nowojski reassigned FLINK-17842: -- Assignee: Piotr Nowojski > Performance regression on 19.05.2020 > > > Key: FLINK-17842 > URL: https://issues.apache.org/jira/browse/FLINK-17842 > Project: Flink > Issue Type: Bug > Components: Benchmarks >Affects Versions: 1.11.0 >Reporter: Piotr Nowojski >Assignee: Piotr Nowojski >Priority: Blocker > Fix For: 1.11.0 > > > There is a noticeable performance regression in many benchmarks: > http://codespeed.dak8s.net:8000/timeline/?ben=serializerHeavyString=2 > http://codespeed.dak8s.net:8000/timeline/?ben=networkThroughput.1000,1ms=2 > http://codespeed.dak8s.net:8000/timeline/?ben=networkThroughput.100,100ms=2 > http://codespeed.dak8s.net:8000/timeline/?ben=globalWindow=2 > that happened on May 19th, probably between 260ef2c and 2f18138
[jira] [Created] (FLINK-17842) Performance regression on 19.05.2020
Piotr Nowojski created FLINK-17842: -- Summary: Performance regression on 19.05.2020 Key: FLINK-17842 URL: https://issues.apache.org/jira/browse/FLINK-17842 Project: Flink Issue Type: Bug Components: Benchmarks Affects Versions: 1.11.0 Reporter: Piotr Nowojski Fix For: 1.11.0 There is a noticeable performance regression in many benchmarks: http://codespeed.dak8s.net:8000/timeline/?ben=serializerHeavyString=2 http://codespeed.dak8s.net:8000/timeline/?ben=networkThroughput.1000,1ms=2 http://codespeed.dak8s.net:8000/timeline/?ben=networkThroughput.100,100ms=2 http://codespeed.dak8s.net:8000/timeline/?ben=globalWindow=2 that happened on May 19th.
[GitHub] [flink] flinkbot edited a comment on pull request #12232: [FLINK-15947] Finish moving scala expression DSL to flink-table-api-scala
flinkbot edited a comment on pull request #12232: URL: https://github.com/apache/flink/pull/12232#issuecomment-630355116 ## CI report: * efc125913ce29720089ebc8ef13131da3c2fab8a Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1931) * bde94ff2e28c3b8d1b9e2b25c38afa24f8a558fd UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #12265: [FLINK-16922][table-common] Fix DecimalData.toUnscaledBytes() should be consistent with BigDecimal.unscaledValue.toByteArray()
flinkbot edited a comment on pull request #12265: URL: https://github.com/apache/flink/pull/12265#issuecomment-631389712 ## CI report: * 4f4662a0211a334a8033d317b57cd8755677c744 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1935)
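As background to the PR above (FLINK-16922), a small stdlib-only Java example of the `BigDecimal` behavior its title refers to: `unscaledValue().toByteArray()` yields the big-endian two's-complement bytes of the unscaled integer, and those bytes plus the scale round-trip the decimal losslessly. This only demonstrates the JDK contract, not Flink's `DecimalData` internals.

```java
import java.math.BigDecimal;
import java.math.BigInteger;
import java.util.Arrays;

public class UnscaledBytesDemo {
    public static void main(String[] args) {
        BigDecimal d = new BigDecimal("12.345");        // unscaled value 12345, scale 3
        byte[] bytes = d.unscaledValue().toByteArray(); // big-endian two's complement
        System.out.println(Arrays.toString(bytes));     // [48, 57] because 12345 = 0x3039

        // Round-trip: unscaled bytes + scale fully determine the decimal
        BigDecimal restored = new BigDecimal(new BigInteger(bytes), d.scale());
        System.out.println(restored.equals(d));         // true

        // Negative values carry the sign in the two's-complement encoding
        System.out.println(Arrays.toString(
            new BigDecimal("-12.345").unscaledValue().toByteArray())); // [-49, -57]
    }
}
```

Any byte-level representation that claims compatibility with this encoding has to match it exactly, including the sign handling shown in the last line.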
[GitHub] [flink] flinkbot edited a comment on pull request #11906: [FLINK-17356][jdbc][postgres] Support PK and Unique constraints
flinkbot edited a comment on pull request #11906: URL: https://github.com/apache/flink/pull/11906#issuecomment-619214462 ## CI report: * 2e339ca93fcf4461ddb3502b49ab34083fc96cf6 UNKNOWN * 0e1a42d1f6f38f2e1e92db036b55a5f54a49402f Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1864) * 66afd5253c17fae0a41bc38f41338a69268ca4ff Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1917) * 1310d3ed1bad9e2356a320128cac125e930831dc Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1932)
[GitHub] [flink] flinkbot edited a comment on pull request #12090: [FLINK-17622][connectors / jdbc] Remove useless switch for decimal in PostgresCatalog
flinkbot edited a comment on pull request #12090: URL: https://github.com/apache/flink/pull/12090#issuecomment-626999634 ## CI report: * 29252f270a654406deb02ed0f0e552605c476b68 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1915)
[GitHub] [flink] flinkbot commented on pull request #12265: [FLINK-16922][table-common] Fix DecimalData.toUnscaledBytes() should be consistent with BigDecimal.unscaledValue.toByteArray()
flinkbot commented on pull request #12265: URL: https://github.com/apache/flink/pull/12265#issuecomment-631389712 ## CI report: * 4f4662a0211a334a8033d317b57cd8755677c744 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #12252: [FLINK-17802][kafka] Set offset commit only if group id is configured for new Kafka Table source
flinkbot edited a comment on pull request #12252: URL: https://github.com/apache/flink/pull/12252#issuecomment-630874045 ## CI report: * 446be48d8b11fc0ae4a6d996a58e4558000900fc Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1921)
[GitHub] [flink] flinkbot edited a comment on pull request #12232: [FLINK-15947] Finish moving scala expression DSL to flink-table-api-scala
flinkbot edited a comment on pull request #12232: URL: https://github.com/apache/flink/pull/12232#issuecomment-630355116 ## CI report: * 6df9602ad51db30a39d5a8c6ed6e750025ff7429 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1919) * efc125913ce29720089ebc8ef13131da3c2fab8a Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1931)
[GitHub] [flink] flinkbot edited a comment on pull request #12263: [FLINK-16998][core] Support backwards compatibility for upgraded RowSerializer
flinkbot edited a comment on pull request #12263: URL: https://github.com/apache/flink/pull/12263#issuecomment-631274882 ## CI report: * 5e0f9df0a404a5d88b8762238ec37b903b9f0e4b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1910) * 0e1d9cde275d0717fb9b32f6d1a3aed600c33166 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #12181: [FLINK-17645][runtime] Reset SafetyNetCloseableRegistry#REAPER_THREAD if it fails to start
flinkbot edited a comment on pull request #12181: URL: https://github.com/apache/flink/pull/12181#issuecomment-629344595 ## CI report: * 05e0b2b0379e0b05c62631147b82711c32f11fcb Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1920) * fbefe16eb3f7769b6daf6cfe1fa26b7a0f7130a8 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1930)
[jira] [Commented] (FLINK-17351) CheckpointCoordinator and CheckpointFailureManager ignores checkpoint timeouts
[ https://issues.apache.org/jira/browse/FLINK-17351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112026#comment-17112026 ] Roman Khachatryan commented on FLINK-17351: --- I think we need to increment the counter for any checkpoint failure that is not caused by another checkpoint (like TOO_MANY_CONCURRENT_CHECKPOINTS) or by a failure with a wider scope (like TASK_FAILURE). > CheckpointCoordinator and CheckpointFailureManager ignores checkpoint timeouts > -- > > Key: FLINK-17351 > URL: https://issues.apache.org/jira/browse/FLINK-17351 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.9.2, 1.10.0 >Reporter: Piotr Nowojski >Assignee: Yuan Mei >Priority: Critical > Fix For: 1.11.0 > > > As described in point 2: > https://issues.apache.org/jira/browse/FLINK-17327?focusedCommentId=17090576=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17090576 > (copy of description from above linked comment): > The logic in how {{CheckpointCoordinator}} handles checkpoint timeouts is > broken. In your [~qinjunjerry] examples, your job should have failed after > the first checkpoint failure, but checkpoints were timing out on the > CheckpointCoordinator after 5 seconds, before {{FlinkKafkaProducer}} was > detecting the Kafka failure after 2 minutes. Those timeouts were not checked > against the {{setTolerableCheckpointFailureNumber(...)}} limit, so the job > kept going with many timed-out checkpoints. Now a funny thing happens: > FlinkKafkaProducer detects the Kafka failure. The funny thing is that it depends > on where the failure was detected: > a) on processing a record? no problem, the job will fail over immediately once > the failure is detected (in this example after 2 minutes) > b) on a checkpoint?
heh, the failure is reported to {{CheckpointCoordinator}} > *and gets ignored, as the PendingCheckpoint has already been discarded 2 minutes > ago* :) So theoretically the checkpoints can keep failing forever and the job > will not restart automatically, unless something else fails. > Even more funny things can happen if we mix FLINK-17350 . or b) with > an intermittent external system failure. The Sink reports an exception, the transaction > was lost/aborted, the Sink is in a failed state, but if there is a happy > coincidence that it manages to accept further records, this exception can be > lost and all of the records in those failed checkpoints will be lost forever > as well. In all of the examples that [~qinjunjerry] posted it hasn't > happened. {{FlinkKafkaProducer}} was not able to recover after the initial > failure and it kept throwing exceptions until the job finally failed (but > much later than it should have). And that's not guaranteed anywhere.
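The counting rule proposed in the comment above can be sketched as follows. This is a hypothetical, self-contained illustration, not Flink's actual `CheckpointFailureManager` API: every checkpoint failure counts toward the tolerable limit unless it was caused by another checkpoint or is subsumed by a failure with a wider scope, and a successful checkpoint resets the counter.

```java
import java.util.EnumSet;
import java.util.Set;

public class CheckpointFailureCounterSketch {
    // Illustrative subset of failure reasons (names assumed for this sketch)
    enum FailureReason {
        CHECKPOINT_DECLINED,             // should count
        CHECKPOINT_EXPIRED,              // timeout: should count (the gap discussed above)
        TOO_MANY_CONCURRENT_CHECKPOINTS, // caused by another checkpoint: not counted
        TASK_FAILURE                     // wider scope, handled by job recovery: not counted
    }

    private static final Set<FailureReason> NOT_COUNTED =
        EnumSet.of(FailureReason.TOO_MANY_CONCURRENT_CHECKPOINTS, FailureReason.TASK_FAILURE);

    private final int tolerableFailures;
    private int continuousFailures;

    CheckpointFailureCounterSketch(int tolerableFailures) {
        this.tolerableFailures = tolerableFailures;
    }

    /** Returns true if accumulated failures exceed the tolerable limit and the job should fail. */
    boolean onCheckpointFailure(FailureReason reason) {
        if (NOT_COUNTED.contains(reason)) {
            return false; // ignored for counting purposes
        }
        return ++continuousFailures > tolerableFailures;
    }

    /** A successful checkpoint resets the continuous-failure counter. */
    void onCheckpointSuccess() {
        continuousFailures = 0;
    }

    public static void main(String[] args) {
        CheckpointFailureCounterSketch counter = new CheckpointFailureCounterSketch(1);
        System.out.println(counter.onCheckpointFailure(FailureReason.CHECKPOINT_EXPIRED));              // false (1 <= 1)
        System.out.println(counter.onCheckpointFailure(FailureReason.TOO_MANY_CONCURRENT_CHECKPOINTS)); // false (not counted)
        System.out.println(counter.onCheckpointFailure(FailureReason.CHECKPOINT_EXPIRED));              // true (2 > 1)
    }
}
```

The key point relative to the bug report: timeouts (`CHECKPOINT_EXPIRED` here) go through the same counter as other failures instead of bypassing it.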
[GitHub] [flink-web] klion26 commented on a change in pull request #247: [FLINK-13683] Translate "Code Style - Component Guide" page into Chinese
klion26 commented on a change in pull request #247: URL: https://github.com/apache/flink-web/pull/247#discussion_r427758065 ## File path: contributing/code-style-and-quality-components.zh.md ## @@ -48,96 +48,96 @@ How to name config keys: } ``` -* The resulting config keys should hence be: +* 因此生成的配置键应该是: - **NOT** `"taskmanager.detailed.network.metrics"` + **不是** `"taskmanager.detailed.network.metrics"` - **But rather** `"taskmanager.network.detailed-metrics"` + **而是** `"taskmanager.network.detailed-metrics"` -### Connectors +### 连接器 -Connectors are historically hard to implement and need to deal with many aspects of threading, concurrency, and checkpointing. +连接器历来很难实现,需要处理多线程、并发和检查点的许多方面。 -As part of [FLIP-27](https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface) we are working on making this much simpler for sources. New sources should not have to deal with any aspect of concurrency/threading and checkpointing any more. +作为 [FLIP-27](https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface) 的一部分,我们正在努力让这些源(source)更加简单。新的源应该不再处理并发/线程和检查点的任何方面。 -A similar FLIP can be expected for sinks in the near future. +预计在不久的将来,会有类似针对 sink 的 FLIP。 -### Examples +### 示例 -Examples should be self-contained and not require systems other than Flink to run. Except for examples that show how to use specific connectors, like the Kafka connector. Sources/sinks that are ok to use are `StreamExecutionEnvironment.socketTextStream`, which should not be used in production but is quite handy for exploring how things work, and file-based sources/sinks. 
(For streaming, there is the continuous file source) +示例应该是自包含的,不需要运行 Flink 以外的系统。除了显示如何使用具体的连接器的示例,比如 Kafka 连接器。源/接收器可以使用 `StreamExecutionEnvironment.socketTextStream`,这个不应该在生产中使用,但对于研究示例如何运行的是相当方便的,以及基于文件的源/接收器。(对于流,有连续的文件源) Review comment: ```suggestion 示例应该是自包含的,不需要运行 Flink 以外的系统。除了显示如何使用具体的连接器的示例,比如 Kafka 连接器。源/接收器可以使用 `StreamExecutionEnvironment.socketTextStream`,这个不应该在生产中使用,但对于研究示例如何运行是相当方便的,以及基于文件的源/接收器。(对于流,有连续的文件源) ``` ## File path: contributing/code-style-and-quality-components.zh.md ## @@ -48,96 +48,96 @@ How to name config keys: } ``` -* The resulting config keys should hence be: +* 因此生成的配置键应该是: Review comment: This sentence does not read smoothly; the simplest fix is to change ”应该是“ to ”应该”. Beyond that, consider whether there is a better translation — it does not have to correspond word for word, as long as the overall meaning is the same. ## File path: contributing/code-style-and-quality-components.zh.md ## @@ -48,96 +48,96 @@ How to name config keys: } ``` -* The resulting config keys should hence be: +* 因此生成的配置键应该是: - **NOT** `"taskmanager.detailed.network.metrics"` + **不是** `"taskmanager.detailed.network.metrics"` - **But rather** `"taskmanager.network.detailed-metrics"` + **而是** `"taskmanager.network.detailed-metrics"` -### Connectors +### 连接器 -Connectors are historically hard to implement and need to deal with many aspects of threading, concurrency, and checkpointing. +连接器历来很难实现,需要处理多线程、并发和检查点的许多方面。 -As part of [FLIP-27](https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface) we are working on making this much simpler for sources. New sources should not have to deal with any aspect of concurrency/threading and checkpointing any more.
+作为 [FLIP-27](https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface) 的一部分,我们正在努力让这些源(source)更加简单。新的源应该不再处理并发/线程和检查点的任何方面。 Review comment: "新的源应该不再处理并发/线程和检查点的任何方面。" Would it be better if this sentence conveyed the "not have to" nuance of the original? ## File path: contributing/code-style-and-quality-components.zh.md ## @@ -48,96 +48,96 @@ How to name config keys: } ``` -* The resulting config keys should hence be: +* 因此生成的配置键应该是: - **NOT** `"taskmanager.detailed.network.metrics"` + **不是** `"taskmanager.detailed.network.metrics"` - **But rather** `"taskmanager.network.detailed-metrics"` + **而是** `"taskmanager.network.detailed-metrics"` -### Connectors +### 连接器 -Connectors are historically hard to implement and need to deal with many aspects of threading, concurrency, and checkpointing. +连接器历来很难实现,需要处理多线程、并发和检查点的许多方面。 -As part of [FLIP-27](https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface) we are working on making this much simpler for sources. New sources should not have to deal with any aspect of concurrency/threading and checkpointing any more. +作为 [FLIP-27](https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface) 的一部分,我们正在努力让这些源(source)更加简单。新的源应该不再处理并发/线程和检查点的任何方面。 -A similar FLIP can be expected for sinks in the near future. +预计在不久的将来,会有类似针对 sink 的 FLIP。 -### Examples +### 示例 -Examples should be self-contained and not require systems other than Flink to run. Except for examples that show how to use specific connectors, like the Kafka connector. Sources/sinks that are ok to use are `StreamExecutionEnvironment.socketTextStream`, which
[GitHub] [flink] twalthr commented on pull request #12263: [FLINK-16998][core] Support backwards compatibility for upgraded RowSerializer
twalthr commented on pull request #12263: URL: https://github.com/apache/flink/pull/12263#issuecomment-631385082 Thanks for the feedback @tzulitai. After some offline discussion, the tests were partially incorrect. I hope the PR is in better shape now.
[jira] [Updated] (FLINK-17840) Add document for new Kafka connector
[ https://issues.apache.org/jira/browse/FLINK-17840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Danny Chen updated FLINK-17840: --- Parent: (was: FLINK-17833) Issue Type: Task (was: Sub-task) > Add document for new Kafka connector > > > Key: FLINK-17840 > URL: https://issues.apache.org/jira/browse/FLINK-17840 > Project: Flink > Issue Type: Task >Reporter: Danny Chen >Priority: Major >
[jira] [Updated] (FLINK-17841) Add document for new ElasticSearch connector
[ https://issues.apache.org/jira/browse/FLINK-17841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Danny Chen updated FLINK-17841: --- Parent: (was: FLINK-17833) Issue Type: Task (was: Sub-task) > Add document for new ElasticSearch connector > > > Key: FLINK-17841 > URL: https://issues.apache.org/jira/browse/FLINK-17841 > Project: Flink > Issue Type: Task >Reporter: Danny Chen >Priority: Major >
[jira] [Updated] (FLINK-17839) Add document for new Hbase connector
[ https://issues.apache.org/jira/browse/FLINK-17839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Danny Chen updated FLINK-17839: --- Parent: (was: FLINK-17833) Issue Type: Task (was: Sub-task) > Add document for new Hbase connector > > > Key: FLINK-17839 > URL: https://issues.apache.org/jira/browse/FLINK-17839 > Project: Flink > Issue Type: Task >Reporter: Danny Chen >Priority: Major >
[jira] [Updated] (FLINK-17838) Add document for new JDBC connector
[ https://issues.apache.org/jira/browse/FLINK-17838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Danny Chen updated FLINK-17838: --- Parent: (was: FLINK-17833) Issue Type: Task (was: Sub-task) > Add document for new JDBC connector > --- > > Key: FLINK-17838 > URL: https://issues.apache.org/jira/browse/FLINK-17838 > Project: Flink > Issue Type: Task >Reporter: Danny Chen >Priority: Major >
[jira] [Updated] (FLINK-17837) Add document for Hive DDL and DML
[ https://issues.apache.org/jira/browse/FLINK-17837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Danny Chen updated FLINK-17837: --- Parent: (was: FLINK-17833) Issue Type: Task (was: Sub-task) > Add document for Hive DDL and DML > - > > Key: FLINK-17837 > URL: https://issues.apache.org/jira/browse/FLINK-17837 > Project: Flink > Issue Type: Task >Reporter: Danny Chen >Priority: Major >
[jira] [Closed] (FLINK-17841) Add document for new ElasticSearch connector
[ https://issues.apache.org/jira/browse/FLINK-17841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Danny Chen closed FLINK-17841. -- Resolution: Duplicate > Add document for new ElasticSearch connector > > > Key: FLINK-17841 > URL: https://issues.apache.org/jira/browse/FLINK-17841 > Project: Flink > Issue Type: Sub-task >Reporter: Danny Chen >Priority: Major >