[jira] [Created] (FLINK-22327) NPE exception happens if it throws exception in finishBundle during job shutdown
Dian Fu created FLINK-22327: --- Summary: NPE exception happens if it throws exception in finishBundle during job shutdown Key: FLINK-22327 URL: https://issues.apache.org/jira/browse/FLINK-22327 Project: Flink Issue Type: Bug Components: API / Python Reporter: Dian Fu Assignee: Dian Fu Fix For: 1.13.0, 1.12.3 Currently, if finishBundle throws an exception during job shutdown, an NPE may occur if a time-based finish bundle is scheduled. As a result, the actual exception is not propagated, which makes it very difficult for users to troubleshoot the problem. See http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/PyFlink-called-already-closed-and-NullPointerException-td42997.html for more details. -- This message was sent by Atlassian Jira (v8.3.4#803005)
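Independently of the actual Flink fix, the masking problem described here can be illustrated with a first-exception-wins guard. The class and method names below are purely illustrative, not PyFlink's real operator code:

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch: record the first failure and suppress later calls,
// so the root-cause exception is the one that gets propagated instead of
// a secondary NPE from the timer-triggered bundle finish.
class BundleFinisher {
    private final AtomicReference<Throwable> firstFailure = new AtomicReference<>();
    private boolean closed = false;

    // Called both on shutdown and by the time-based bundle trigger.
    void finishBundle(Runnable bundleAction) {
        if (closed) {
            return; // timer fired after shutdown: do nothing instead of hitting nulled fields
        }
        try {
            bundleAction.run();
        } catch (Throwable t) {
            firstFailure.compareAndSet(null, t); // keep only the first (root) cause
            closed = true;
        }
    }

    Throwable failure() {
        return firstFailure.get();
    }
}
```

With this pattern, a later timer-triggered invocation cannot overwrite or mask the exception thrown during shutdown.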
[jira] [Created] (FLINK-22326) Job contains Iterate Operator always fails on Checkpoint
Lu Niu created FLINK-22326: -- Summary: Job contains Iterate Operator always fails on Checkpoint Key: FLINK-22326 URL: https://issues.apache.org/jira/browse/FLINK-22326 Project: Flink Issue Type: Improvement Components: Runtime / Checkpointing Affects Versions: 1.11.1 Reporter: Lu Niu Attachments: Screen Shot 2021-04-16 at 12.40.34 PM.png, Screen Shot 2021-04-16 at 12.43.38 PM.png A job containing an Iterate Operator will always fail on checkpoint. How to reproduce: [https://gist.github.com/qqibrow/f297babadb0bb662ee398b9088870785] This is based on [https://github.com/apache/flink/blob/master/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/iteration/IterateExample.java], with a few lines of difference: 1. Make maxWaitTime large enough when creating the IterativeStream 2. No output back to the Iterative Source Result: The same code is able to checkpoint in 1.9.1 !image-2021-04-16-12-45-23-624.png! but always fails on checkpoint in 1.11 !image-2021-04-16-12-41-35-002.png! It seems
[jira] [Created] (FLINK-22325) Channel state iterator is accessed concurrently without proper synchronization
Roman Khachatryan created FLINK-22325: - Summary: Channel state iterator is accessed concurrently without proper synchronization Key: FLINK-22325 URL: https://issues.apache.org/jira/browse/FLINK-22325 Project: Flink Issue Type: Bug Components: Runtime / Checkpointing Reporter: Roman Khachatryan Assignee: Roman Khachatryan Fix For: 1.13.0 ChannelStateWriter adds input/output data that is written by a dedicated thread. The data is passed as CloseableIterator. In some cases, iterator.close can be called from the task thread which can lead to double release of buffers.
[jira] [Created] (FLINK-22324) Backport FLINK-18071 for 1.12.x
Stephan Ewen created FLINK-22324: Summary: Backport FLINK-18071 for 1.12.x Key: FLINK-22324 URL: https://issues.apache.org/jira/browse/FLINK-22324 Project: Flink Issue Type: Bug Components: Runtime / Coordination Affects Versions: 1.12.2 Reporter: Stephan Ewen Assignee: Stephan Ewen Fix For: 1.12.3 See FLINK-18071 - this issue only tracks the backport to allow closing the blocker issue for 1.13.0.
[jira] [Created] (FLINK-22323) JobEdges Typos
lqjacklee created FLINK-22323: - Summary: JobEdges Typos Key: FLINK-22323 URL: https://issues.apache.org/jira/browse/FLINK-22323 Project: Flink Issue Type: Bug Components: Runtime / Task Reporter: lqjacklee Fix For: 1.12.3
Confluence permission request
Hi, I want to contribute to Flink. Would someone please give me Confluence permission? My Confluence ID is roc-marshal. My full name is RocMarshal. My JIRA ID is RocMarshal. Thank you. Best, Roc.
[jira] [Created] (FLINK-22322) Build hang when compiling Web UI
Dawid Wysakowicz created FLINK-22322: Summary: Build hang when compiling Web UI Key: FLINK-22322 URL: https://issues.apache.org/jira/browse/FLINK-22322 Project: Flink Issue Type: Bug Components: Build System Affects Versions: 1.13.0 Reporter: Dawid Wysakowicz https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16667=logs=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5=54421a62-0c80-5aad-3319-094ff69180bb=3560
{code}
[INFO] --- frontend-maven-plugin:1.6:install-node-and-npm (install node and npm) @ flink-runtime-web_2.11 ---
[INFO] Installing node version v10.9.0
[INFO] Downloading https://nodejs.org/dist/v10.9.0/node-v10.9.0-linux-x64.tar.gz to /__w/2/.m2/repository/com/github/eirslett/node/10.9.0/node-10.9.0-linux-x64.tar.gz
[INFO] No proxies configured
[INFO] No proxy was configured, downloading directly
[INFO] Unpacking /__w/2/.m2/repository/com/github/eirslett/node/10.9.0/node-10.9.0-linux-x64.tar.gz into /__w/2/s/flink-runtime-web/web-dashboard/node/tmp
[INFO] Copying node binary from /__w/2/s/flink-runtime-web/web-dashboard/node/tmp/node-v10.9.0-linux-x64/bin/node to /__w/2/s/flink-runtime-web/web-dashboard/node/node
[INFO] Extracting NPM
[INFO] Installed node locally.
[INFO]
[INFO] --- frontend-maven-plugin:1.6:npm (npm install) @ flink-runtime-web_2.11 ---
[INFO] Running 'npm ci --cache-max=0 --no-save' in /__w/2/s/flink-runtime-web/web-dashboard
##[error]The operation was canceled.
{code}
Could be a transient Azure error
[jira] [Created] (FLINK-22321) Drop old user-defined function stack
Timo Walther created FLINK-22321: Summary: Drop old user-defined function stack Key: FLINK-22321 URL: https://issues.apache.org/jira/browse/FLINK-22321 Project: Flink Issue Type: Sub-task Components: Table SQL / API Reporter: Timo Walther Assignee: Timo Walther FLIP-65 functions have been in place for a couple of releases. It is time to drop the old function stack once we drop the legacy planner.
[jira] [Created] (FLINK-22318) Support RENAME column name for ALTER TABLE statement
Jark Wu created FLINK-22318: --- Summary: Support RENAME column name for ALTER TABLE statement Key: FLINK-22318 URL: https://issues.apache.org/jira/browse/FLINK-22318 Project: Flink Issue Type: Sub-task Components: Table SQL / API Reporter: Jark Wu
[jira] [Created] (FLINK-22319) Support RESET table option for ALTER TABLE statement
Jark Wu created FLINK-22319: --- Summary: Support RESET table option for ALTER TABLE statement Key: FLINK-22319 URL: https://issues.apache.org/jira/browse/FLINK-22319 Project: Flink Issue Type: Sub-task Components: Table SQL / API Reporter: Jark Wu
[jira] [Created] (FLINK-22317) Support DROP column/constraint/watermark for ALTER TABLE statement
Jark Wu created FLINK-22317: --- Summary: Support DROP column/constraint/watermark for ALTER TABLE statement Key: FLINK-22317 URL: https://issues.apache.org/jira/browse/FLINK-22317 Project: Flink Issue Type: Sub-task Components: Table SQL / API Reporter: Jark Wu
[jira] [Created] (FLINK-22320) Add documentation for newly introduced ALTER TABLE statements
Jark Wu created FLINK-22320: --- Summary: Add documentation for newly introduced ALTER TABLE statements Key: FLINK-22320 URL: https://issues.apache.org/jira/browse/FLINK-22320 Project: Flink Issue Type: Sub-task Components: Table SQL / API Reporter: Jark Wu
[jira] [Created] (FLINK-22316) Support MODIFY column/constraint/watermark for ALTER TABLE statement
Jark Wu created FLINK-22316: --- Summary: Support MODIFY column/constraint/watermark for ALTER TABLE statement Key: FLINK-22316 URL: https://issues.apache.org/jira/browse/FLINK-22316 Project: Flink Issue Type: Sub-task Components: Table SQL / API Reporter: Jark Wu
[jira] [Created] (FLINK-22315) Support ADD column/constraint/watermark for ALTER TABLE statement
Jark Wu created FLINK-22315: --- Summary: Support ADD column/constraint/watermark for ALTER TABLE statement Key: FLINK-22315 URL: https://issues.apache.org/jira/browse/FLINK-22315 Project: Flink Issue Type: Sub-task Components: Table SQL / API Reporter: Jark Wu
[jira] [Created] (FLINK-22314) AggRecordsCombiner should combine buffered records first instead of accumulating on state directly
Jark Wu created FLINK-22314: --- Summary: AggRecordsCombiner should combine buffered records first instead of accumulating on state directly Key: FLINK-22314 URL: https://issues.apache.org/jira/browse/FLINK-22314 Project: Flink Issue Type: Bug Components: Table SQL / Runtime Reporter: Jark Wu In Window TVF Aggregation, the {{AggRecordsCombiner}} currently accumulates buffered records directly on state. This is bad for performance. We can accumulate records in memory first and then merge the accumulator into state, provided the aggregate functions support the {{merge()}} method. This can greatly reduce state access when {{COUNT DISTINCT}} is used.
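The proposed optimization can be sketched in plain Java. The class below is an illustrative stand-in, not the real {{AggRecordsCombiner}} or Flink's state API: combining buffered records into a local accumulator first turns one state access per record into a single merge per bundle.

```java
import java.util.*;

// Illustrative sketch of "combine buffered records first, then merge into state".
// A Map stands in for keyed state, and a HashSet for a COUNT DISTINCT accumulator.
class CountDistinctCombiner {
    private final Map<String, Set<String>> state = new HashMap<>(); // stand-in for keyed state
    int stateAccesses = 0;

    // Naive variant: one state access per buffered record.
    void accumulateDirectly(String key, List<String> bufferedRecords) {
        for (String r : bufferedRecords) {
            state.computeIfAbsent(key, k -> new HashSet<>()).add(r);
            stateAccesses++;
        }
    }

    // Combined variant: build a local accumulator in memory, then one merge() into state.
    void combineThenMerge(String key, List<String> bufferedRecords) {
        Set<String> localAcc = new HashSet<>(bufferedRecords);               // in-memory combine
        state.computeIfAbsent(key, k -> new HashSet<>()).addAll(localAcc);   // single merge
        stateAccesses++;
    }

    long countDistinct(String key) {
        return state.getOrDefault(key, Collections.emptySet()).size();
    }
}
```

Both variants produce the same distinct count, but the combined variant touches state once per bundle instead of once per record.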
[jira] [Created] (FLINK-22312) YARNSessionFIFOSecuredITCase>YARNSessionFIFOITCase.checkForProhibitedLogContents fails due to a heartbeat exception with the YARN RM
Guowei Ma created FLINK-22312: - Summary: YARNSessionFIFOSecuredITCase>YARNSessionFIFOITCase.checkForProhibitedLogContents fails due to a heartbeat exception with the YARN RM Key: FLINK-22312 URL: https://issues.apache.org/jira/browse/FLINK-22312 Project: Flink Issue Type: Bug Components: Deployment / YARN Affects Versions: 1.11.3 Reporter: Guowei Ma https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16633=logs=fc5181b0-e452-5c8f-68de-1097947f6483=6b04ca5f-0b52-511d-19c9-52bf0d9fbdfa=26614
{code:java}
 at org.junit.rules.RunRules.evaluate(RunRules.java:20)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
 at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
 at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
 at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
 at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
 at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{code}
[jira] [Created] (FLINK-22313) Redundant CAST in plan when selecting window start and window end in window agg
Caizhi Weng created FLINK-22313: --- Summary: Redundant CAST in plan when selecting window start and window end in window agg Key: FLINK-22313 URL: https://issues.apache.org/jira/browse/FLINK-22313 Project: Flink Issue Type: Bug Components: Table SQL / API Affects Versions: 1.13.0 Reporter: Caizhi Weng Add the following test case to {{org.apache.flink.table.planner.plan.stream.sql.agg.WindowAggregateTest}} to reproduce this bug.
{code:scala}
@Test
def testSessionFunction(): Unit = {
  val sql =
    """
      |SELECT
      |COUNT(*),
      |SESSION_START(proctime, INTERVAL '15' MINUTE),
      |SESSION_END(proctime, INTERVAL '15' MINUTE)
      |FROM MyTable
      |GROUP BY SESSION(proctime, INTERVAL '15' MINUTE)
    """.stripMargin
  util.verifyExecPlan(sql)
}
{code}
The produced plan is
{code}
Calc(select=[EXPR$0, CAST(w$start) AS EXPR$1, CAST(w$end) AS EXPR$2])
+- GroupWindowAggregate(window=[SessionGroupWindow('w$, proctime, 90)], properties=[w$start, w$end, w$proctime], select=[COUNT(*) AS EXPR$0, start('w$) AS w$start, end('w$) AS w$end, proctime('w$) AS w$proctime])
   +- Exchange(distribution=[single])
      +- Calc(select=[proctime])
         +- WatermarkAssigner(rowtime=[rowtime], watermark=[(rowtime - 1000:INTERVAL SECOND)])
            +- Calc(select=[PROCTIME() AS proctime, rowtime])
               +- TableSourceScan(table=[[default_catalog, default_database, MyTable, project=[rowtime]]], fields=[rowtime])
{code}
This is because the nullability indicated by {{PlannerWindowStart#getResultType}} and {{SqlGroupedWindowFunction#WindowStartEndReturnTypeInference}} are different. Actually, the time attribute and window start/end should always be non-null.
[jira] [Created] (FLINK-22311) Flink JDBC XA connector needs maxRetries set to 0 to work properly
Maciej Bryński created FLINK-22311: -- Summary: Flink JDBC XA connector needs maxRetries set to 0 to work properly Key: FLINK-22311 URL: https://issues.apache.org/jira/browse/FLINK-22311 Project: Flink Issue Type: Bug Reporter: Maciej Bryński Hi, We're using the XA connector from Flink 1.13 in one of our projects and we were able to produce duplicate records during writes to Oracle. The reason is that the default MAX_RETRIES in JdbcExecutionOptions is 3, and this can cause duplicates in the DB. I think we should at least mention this in the docs or even validate this option when creating the XA sink. The documentation currently uses the defaults. https://github.com/apache/flink/pull/10847/files#diff-a585e56c997756bb7517ebd2424e5fab5813cee67d8dee3eab6ddd0780aff627R88
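The duplication mechanism can be illustrated with a self-contained simulation (hypothetical classes, not the real JDBC XA sink): a batch reaches the database but the acknowledgement is lost, so a retrying sink replays the whole batch.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative simulation: rows are written but the sink still observes a failure
// (e.g. the connection drops before the ack), so driver-level retries duplicate them.
class RetrySimulation {
    final List<String> table = new ArrayList<>();
    private boolean ackLostOnce = true;

    private void executeBatch(List<String> batch) throws Exception {
        table.addAll(batch);                  // rows reach the database...
        if (ackLostOnce) {
            ackLostOnce = false;
            throw new Exception("connection dropped before ack"); // ...but the sink sees a failure
        }
    }

    // maxRetries > 0: the sink replays the batch itself -> duplicate rows.
    // maxRetries == 0: the failure surfaces, so the surrounding XA transaction
    // can be rolled back and retried transactionally instead.
    void flush(List<String> batch, int maxRetries) throws Exception {
        for (int attempt = 0; ; attempt++) {
            try {
                executeBatch(batch);
                return;
            } catch (Exception e) {
                if (attempt >= maxRetries) throw e;
            }
        }
    }
}
```

This is why non-transactional retries inside the sink conflict with the exactly-once guarantee the XA protocol is supposed to provide.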
[jira] [Created] (FLINK-22310) The result of LogicalWindowJsonDeserializer is incorrect
godfrey he created FLINK-22310: -- Summary: The result of LogicalWindowJsonDeserializer is incorrect Key: FLINK-22310 URL: https://issues.apache.org/jira/browse/FLINK-22310 Project: Flink Issue Type: Sub-task Components: Table SQL / Planner Reporter: godfrey he Fix For: 1.13.0 The reason is that the wrong argument is given in LogicalWindowJsonDeserializer when creating the FieldReferenceExpression instance, see line#137 in LogicalWindowJsonDeserializer.
[jira] [Created] (FLINK-22309) Reduce response time when using SQL Client to submit queries (SELECT)
Shengkai Fang created FLINK-22309: - Summary: Reduce response time when using SQL Client to submit queries (SELECT) Key: FLINK-22309 URL: https://issues.apache.org/jira/browse/FLINK-22309 Project: Flink Issue Type: Improvement Components: Table SQL / Client Affects Versions: 1.13.0 Reporter: Shengkai Fang
[jira] [Created] (FLINK-22308) Fix CliTableauResultView printing results after cancel in STREAMING mode
Shengkai Fang created FLINK-22308: - Summary: Fix CliTableauResultView printing results after cancel in STREAMING mode Key: FLINK-22308 URL: https://issues.apache.org/jira/browse/FLINK-22308 Project: Flink Issue Type: Bug Components: Table SQL / Client Affects Versions: 1.13.0 Reporter: Shengkai Fang Fix For: 1.13.0
Re: [DISCUSS] Releasing Flink 1.12.3
+1
Thanks for driving this, Arvid.
Best,
Guowei

On Fri, Apr 16, 2021 at 5:58 PM Till Rohrmann wrote:
> +1.
>
> Thanks for volunteering Arvid.
>
> Cheers,
> Till
>
> On Fri, Apr 16, 2021 at 9:50 AM Stephan Ewen wrote:
> > +1
> >
> > Thanks for pushing this, Arvid, let's get this fix out asap.
> >
> > On Fri, Apr 16, 2021 at 9:46 AM Arvid Heise wrote:
> > > Dear devs,
> > >
> > > Since we just fixed a severe bug that causes the dataflow to halt under
> > > specific circumstances [1], we would like to release a bugfix asap.
> > >
> > > I would volunteer as the release manager and kick off the release process
> > > on next Monday (April 19th).
> > > What do you think?
> > >
> > > Note that this time around, I would not wait for any specific
> > > fixes/backports. However, you can still merge all fixes that you'd like to
> > > see in 1.12.3 until Monday.
> > >
> > > Btw the fix is already in master and will be directly applied to the next
> > > RC of 1.13.0. Flink version 1.11.x and older are not affected.
> > >
> > > [1] https://issues.apache.org/jira/browse/FLINK-21992
Re: [DISCUSS] Releasing Flink 1.12.3
+1.
Thanks for volunteering Arvid.
Cheers,
Till

On Fri, Apr 16, 2021 at 9:50 AM Stephan Ewen wrote:
> +1
>
> Thanks for pushing this, Arvid, let's get this fix out asap.
>
> On Fri, Apr 16, 2021 at 9:46 AM Arvid Heise wrote:
> > Dear devs,
> >
> > Since we just fixed a severe bug that causes the dataflow to halt under
> > specific circumstances [1], we would like to release a bugfix asap.
> >
> > I would volunteer as the release manager and kick off the release process
> > on next Monday (April 19th).
> > What do you think?
> >
> > Note that this time around, I would not wait for any specific
> > fixes/backports. However, you can still merge all fixes that you'd like to
> > see in 1.12.3 until Monday.
> >
> > Btw the fix is already in master and will be directly applied to the next
> > RC of 1.13.0. Flink version 1.11.x and older are not affected.
> >
> > [1] https://issues.apache.org/jira/browse/FLINK-21992
[jira] [Created] (FLINK-22307) Increase the data writing cache size of sort-merge blocking shuffle
Yingjie Cao created FLINK-22307: --- Summary: Increase the data writing cache size of sort-merge blocking shuffle Key: FLINK-22307 URL: https://issues.apache.org/jira/browse/FLINK-22307 Project: Flink Issue Type: Sub-task Components: Runtime / Network Reporter: Yingjie Cao Fix For: 1.13.0 Currently, the data writing cache is 8M, which is not enough if data compression is enabled. By increasing the cache size to 16M, the performance of our benchmark job can be increased by about 20%. (We may make it configurable in the future)
[jira] [Created] (FLINK-22306) KafkaITCase.testCollectingSchema failed on AZP
Till Rohrmann created FLINK-22306: - Summary: KafkaITCase.testCollectingSchema failed on AZP Key: FLINK-22306 URL: https://issues.apache.org/jira/browse/FLINK-22306 Project: Flink Issue Type: Bug Components: Connectors / Kafka Affects Versions: 1.13.0 Reporter: Till Rohrmann Fix For: 1.13.0 The {{KafkaITCase.testCollectingSchema}} failed on AZP with
{code}
2021-04-15T10:22:06.8263865Z org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
2021-04-15T10:22:06.8266577Z  at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
2021-04-15T10:22:06.8267526Z  at org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137)
2021-04-15T10:22:06.8268034Z  at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
2021-04-15T10:22:06.8268496Z  at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
2021-04-15T10:22:06.8269133Z  at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
2021-04-15T10:22:06.8270205Z  at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
2021-04-15T10:22:06.8270698Z  at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:237)
2021-04-15T10:22:06.8271192Z  at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
2021-04-15T10:22:06.8274903Z  at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
2021-04-15T10:22:06.8275602Z  at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
2021-04-15T10:22:06.8276139Z  at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
2021-04-15T10:22:06.8276589Z  at org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1081)
2021-04-15T10:22:06.8276965Z  at akka.dispatch.OnComplete.internal(Future.scala:264)
2021-04-15T10:22:06.8277307Z  at akka.dispatch.OnComplete.internal(Future.scala:261)
2021-04-15T10:22:06.8277634Z  at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
2021-04-15T10:22:06.8277971Z  at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
2021-04-15T10:22:06.8278352Z  at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
2021-04-15T10:22:06.8278767Z  at org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73)
2021-04-15T10:22:06.8279223Z  at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
2021-04-15T10:22:06.8279743Z  at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
2021-04-15T10:22:06.8280130Z  at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
2021-04-15T10:22:06.8280561Z  at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
2021-04-15T10:22:06.8287231Z  at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
2021-04-15T10:22:06.8291223Z  at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
2021-04-15T10:22:06.8291779Z  at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
2021-04-15T10:22:06.8292745Z  at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
2021-04-15T10:22:06.8293335Z  at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
2021-04-15T10:22:06.8294000Z  at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
2021-04-15T10:22:06.8294702Z  at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
2021-04-15T10:22:06.8295281Z  at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
2021-04-15T10:22:06.8295905Z  at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
2021-04-15T10:22:06.8296412Z  at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
2021-04-15T10:22:06.8296799Z  at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
2021-04-15T10:22:06.8297353Z  at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
2021-04-15T10:22:06.8297805Z  at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
2021-04-15T10:22:06.8298253Z  at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
2021-04-15T10:22:06.8298647Z  at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
2021-04-15T10:22:06.8299217Z  at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
{code}
[jira] [Created] (FLINK-22305) Increase the default value of taskmanager.network.sort-shuffle.min-buffers
Yingjie Cao created FLINK-22305: --- Summary: Increase the default value of taskmanager.network.sort-shuffle.min-buffers Key: FLINK-22305 URL: https://issues.apache.org/jira/browse/FLINK-22305 Project: Flink Issue Type: Sub-task Components: Runtime / Network Reporter: Yingjie Cao Fix For: 1.13.0 Currently, the default value of taskmanager.network.sort-shuffle.min-buffers is 64, which is pretty small. As suggested, we'd like to increase this default. A larger default avoids the corner case of very small in-memory sort and write buffers, which is better for performance.
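Until the default changes, the value can be raised manually in flink-conf.yaml. The 512 below is an arbitrary illustrative value, not the proposed new default:

```yaml
# Raise the minimum number of network buffers used by sort-shuffle
# (default in the affected versions: 64, considered too small).
taskmanager.network.sort-shuffle.min-buffers: 512
```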
[jira] [Created] (FLINK-22304) Refactor some interfaces for TVF based window to improve the scalability
Shuo Cheng created FLINK-22304: -- Summary: Refactor some interfaces for TVF based window to improve the scalability Key: FLINK-22304 URL: https://issues.apache.org/jira/browse/FLINK-22304 Project: Flink Issue Type: Improvement Components: Table SQL / API Affects Versions: 1.13.0 Reporter: Shuo Cheng Fix For: 1.13.1 Refactoring `WindowBuffer` and `WindowCombineFunction` to make the implementation more scalable
Re: [DISCUSS] Releasing Flink 1.12.3
+1
Thanks for pushing this, Arvid, let's get this fix out asap.

On Fri, Apr 16, 2021 at 9:46 AM Arvid Heise wrote:
> Dear devs,
>
> Since we just fixed a severe bug that causes the dataflow to halt under
> specific circumstances [1], we would like to release a bugfix asap.
>
> I would volunteer as the release manager and kick off the release process
> on next Monday (April 19th).
> What do you think?
>
> Note that this time around, I would not wait for any specific
> fixes/backports. However, you can still merge all fixes that you'd like to
> see in 1.12.3 until Monday.
>
> Btw the fix is already in master and will be directly applied to the next
> RC of 1.13.0. Flink version 1.11.x and older are not affected.
>
> [1] https://issues.apache.org/jira/browse/FLINK-21992
[DISCUSS] Releasing Flink 1.12.3
Dear devs,

Since we just fixed a severe bug that causes the dataflow to halt under specific circumstances [1], we would like to release a bugfix asap.

I would volunteer as the release manager and kick off the release process on next Monday (April 19th). What do you think?

Note that this time around, I would not wait for any specific fixes/backports. However, you can still merge all fixes that you'd like to see in 1.12.3 until Monday.

Btw the fix is already in master and will be directly applied to the next RC of 1.13.0. Flink version 1.11.x and older are not affected.

[1] https://issues.apache.org/jira/browse/FLINK-21992
[jira] [Created] (FLINK-22303) FlinkRelMdFilteredColumnInterval should remap the columnIndex of the inputRel, otherwise it may cause an IllegalArgumentException or return incorrect metadata
lincoln lee created FLINK-22303: --- Summary: FlinkRelMdFilteredColumnInterval should remap the columnIndex of the inputRel, otherwise it may cause an IllegalArgumentException or return incorrect metadata Key: FLINK-22303 URL: https://issues.apache.org/jira/browse/FLINK-22303 Project: Flink Issue Type: Bug Components: Table SQL / Planner Affects Versions: 1.12.2 Reporter: lincoln lee FlinkRelMdFilteredColumnInterval should remap the columnIndex of the inputRel, otherwise it may cause an IllegalArgumentException or return incorrect metadata. The following case will get an `IllegalArgumentException`:
{code}
@Test
def testFilteredColumnIntervalValidation(): Unit = {
  util.verifyExecPlan(
    s"""
       |select
       |  sum(uv) filter (where c = 'all') as all_uv
       |from (
       |  select
       |    c, count(1) as uv
       |  from T
       |  group by c
       |) t
       |""".stripMargin)
}
{code}
{code}
Caused by: java.lang.IllegalArgumentException
 at org.apache.flink.util.Preconditions.checkArgument(Preconditions.java:122)
 at org.apache.flink.table.planner.plan.stats.ValueInterval$.compare(ValueInterval.scala:290)
 at org.apache.flink.table.planner.plan.stats.ValueInterval$.compareAndHandle(ValueInterval.scala:304)
 at org.apache.flink.table.planner.plan.stats.ValueInterval$.isIntersected(ValueInterval.scala:247)
 at org.apache.flink.table.planner.plan.stats.ValueInterval$.intersect(ValueInterval.scala:189)
 at org.apache.flink.table.planner.plan.utils.ColumnIntervalUtil$$anonfun$5$$anonfun$8.apply(ColumnIntervalUtil.scala:226)
 at org.apache.flink.table.planner.plan.utils.ColumnIntervalUtil$$anonfun$5$$anonfun$8.apply(ColumnIntervalUtil.scala:226)
 at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)
 at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)
 at scala.collection.mutable.ArrayBuffer.foldLeft(ArrayBuffer.scala:48)
 at org.apache.flink.table.planner.plan.utils.ColumnIntervalUtil$$anonfun$5.apply(ColumnIntervalUtil.scala:226)
 at org.apache.flink.table.planner.plan.utils.ColumnIntervalUtil$$anonfun$5.apply(ColumnIntervalUtil.scala:221)
 at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
 at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
 at scala.collection.Iterator$class.foreach(Iterator.scala:891)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
 at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
 at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
 at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
 at scala.collection.AbstractTraversable.map(Traversable.scala:104)
 at org.apache.flink.table.planner.plan.utils.ColumnIntervalUtil$.getColumnIntervalWithFilter(ColumnIntervalUtil.scala:221)
 at org.apache.flink.table.planner.plan.metadata.FlinkRelMdFilteredColumnInterval.getFilteredColumnInterval(FlinkRelMdFilteredColumnInterval.scala:137)
{code}
[jira] [Created] (FLINK-22302) Restructure SQL "Queries" documentation into one page per operation
Jark Wu created FLINK-22302: --- Summary: Restructure SQL "Queries" documentation into one page per operation Key: FLINK-22302 URL: https://issues.apache.org/jira/browse/FLINK-22302 Project: Flink Issue Type: Task Components: Documentation, Table SQL / API Reporter: Jark Wu Assignee: Jark Wu Fix For: 1.13.0 Currently, the "Queries" page has become very large and it keeps getting longer as we support more features. We already have separate pages for Joins and CEP. I would propose to split "Queries" into one page per operation. This way we can easily add more detailed information and more examples for each operation.
[jira] [Created] (FLINK-22301) Statebackend and CheckpointStorage type is not shown in the Web UI
Yun Gao created FLINK-22301: --- Summary: Statebackend and CheckpointStorage type is not shown in the Web UI Key: FLINK-22301 URL: https://issues.apache.org/jira/browse/FLINK-22301 Project: Flink Issue Type: Improvement Components: Runtime / Checkpointing, Runtime / Web Frontend Affects Versions: 1.13.0 Reporter: Yun Gao