[jira] [Closed] (FLINK-30132) Test LocalRecoveryITCase#testRecoverLocallyFromProcessCrashWithWorkingDirectory failed on azure due to File not exists

2023-11-28 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl closed FLINK-30132.
-
Resolution: Duplicate

Closing FLINK-30132 in favor of FLINK-33641 because there is more discussion 
going on there and it appears to cover the same issue.

> Test 
> LocalRecoveryITCase#testRecoverLocallyFromProcessCrashWithWorkingDirectory 
> failed on azure due to File not exists
> --
>
> Key: FLINK-30132
> URL: https://issues.apache.org/jira/browse/FLINK-30132
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.16.0, 1.19.0
>Reporter: Leonard Xu
>Priority: Major
>  Labels: test-stability
>
> {noformat}
>   at 
> sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
>   at java.nio.file.Files.readAttributes(Files.java:1737)
>   at 
> java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
>   at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
>   at java.nio.file.FileTreeWalker.next(FileTreeWalker.java:372)
>   at java.nio.file.Files.walkFileTree(Files.java:2706)
>   at java.nio.file.Files.walkFileTree(Files.java:2742)
>   at 
> org.junit.jupiter.engine.extension.TempDirectory$CloseablePath.deleteAllFilesAndDirectories(TempDirectory.java:199)
>   at 
> org.junit.jupiter.engine.extension.TempDirectory$CloseablePath.close(TempDirectory.java:186)
>   ... 51 more
>   Suppressed: java.nio.file.NoSuchFileException: 
> /tmp/junit2010448393472419340/tm_taskManager_2/localState/aid_21c128b018cc61989c323cda6e36b0b1/jid_e5dbf7bc4ebb72baf20387e555083439/vtx_bc764cd8ddf7a0cff126f51c16239658_sti_1
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>   at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:244)
>   at 
> sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
>   at java.nio.file.Files.delete(Files.java:1126)
>   at 
> org.junit.jupiter.engine.extension.TempDirectory$CloseablePath$1.resetPermissionsAndTryToDeleteAgain(TempDirectory.java:250)
>   at 
> org.junit.jupiter.engine.extension.TempDirectory$CloseablePath$1.visitFileFailed(TempDirectory.java:212)
>   at 
> org.junit.jupiter.engine.extension.TempDirectory$CloseablePath$1.visitFileFailed(TempDirectory.java:199)
>   at java.nio.file.Files.walkFileTree(Files.java:2672)
>   ... 54 more
> Nov 21 19:52:57 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 15.971 s <<< FAILURE! - in 
> org.apache.flink.test.recovery.LocalRecoveryITCase
> Nov 21 19:52:57 [ERROR] 
> org.apache.flink.test.recovery.LocalRecoveryITCase.testRecoverLocallyFromProcessCrashWithWorkingDirectory
>   Time elapsed: 15.942 s  <<< ERROR!
> {noformat}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43366=logs=a57e0635-3fad-5b08-57c7-a4142d7d6fa9=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33414) MiniClusterITCase.testHandleStreamingJobsWhenNotEnoughSlot fails due to unexpected TimeoutException

2023-11-28 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17790953#comment-17790953
 ] 

Matthias Pohl commented on FLINK-33414:
---

https://github.com/XComp/flink/actions/runs/7019921193/job/19098991758#step:12:8972

> MiniClusterITCase.testHandleStreamingJobsWhenNotEnoughSlot fails due to 
> unexpected TimeoutException
> ---
>
> Key: FLINK-33414
> URL: https://issues.apache.org/jira/browse/FLINK-33414
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: github-actions, test-stability
>
> We see this test instability in [this 
> build|https://github.com/XComp/flink/actions/runs/6695266358/job/18192039035#step:12:9253].
> {code:java}
> Error: 17:04:52 17:04:52.042 [ERROR] Failures: 
> Error: 17:04:52 17:04:52.042 [ERROR]   
> MiniClusterITCase.testHandleStreamingJobsWhenNotEnoughSlot:120 
> Oct 30 17:04:52 Expecting a throwable with root cause being an instance of:
> Oct 30 17:04:52   
> org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException
> Oct 30 17:04:52 but was an instance of:
> Oct 30 17:04:52   java.util.concurrent.TimeoutException: Timeout has 
> occurred: 100 ms
> Oct 30 17:04:52   at 
> org.apache.flink.runtime.jobmaster.slotpool.PhysicalSlotRequestBulkCheckerImpl.lambda$schedulePendingRequestBulkWithTimestampCheck$0(PhysicalSlotRequestBulkCheckerImpl.java:86)
> Oct 30 17:04:52   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
> Oct 30 17:04:52   at 
> java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> Oct 30 17:04:52   ...(27 remaining lines not displayed - this can be 
> changed with Assertions.setMaxStackTraceElementsDisplayed) {code}
> The same error occurred in the [finegrained_resourcemanager stage of this 
> build|https://github.com/XComp/flink/actions/runs/6468655160/job/17563927249#step:11:26516]
>  (as reported in FLINK-33245).
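The failing assertion walks the thrown exception's cause chain down to its root and checks the root's type. That walk is simple to picture; a framework-agnostic sketch in Python (illustrative only, not AssertJ's actual implementation):

```python
def root_cause(exc):
    """Walk the exception chain to its deepest cause, the way
    'root cause being an instance of' style assertions do."""
    while exc.__cause__ is not None:
        exc = exc.__cause__
    return exc

# Build a chain: a TimeoutError wrapped inside a RuntimeError,
# mirroring "expected NoResourceAvailableException, got TimeoutException".
try:
    try:
        raise TimeoutError("Timeout has occurred: 100 ms")
    except TimeoutError as inner:
        raise RuntimeError("job failed") from inner
except RuntimeError as outer:
    captured = outer
```

Here the assertion on `root_cause(captured)` would fail for the expected type and pass for `TimeoutError`, which is exactly the mismatch the build log shows.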





[jira] [Commented] (FLINK-32006) AsyncWaitOperatorTest.testProcessingTimeWithTimeoutFunctionOrderedWithRetry times out on Azure

2023-11-28 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17790951#comment-17790951
 ] 

Matthias Pohl commented on FLINK-32006:
---

FLINK-27075-related: 
https://github.com/XComp/flink/actions/runs/7019921193/job/19099004364#step:12:9613

> AsyncWaitOperatorTest.testProcessingTimeWithTimeoutFunctionOrderedWithRetry 
> times out on Azure
> --
>
> Key: FLINK-32006
> URL: https://issues.apache.org/jira/browse/FLINK-32006
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.18.0, 1.19.0
>Reporter: David Morávek
>Assignee: David Morávek
>Priority: Critical
>  Labels: pull-request-available, stale-assigned, test-stability
>
> {code:java}
> May 04 13:52:18 [ERROR] 
> org.apache.flink.streaming.api.operators.async.AsyncWaitOperatorTest.testProcessingTimeWithTimeoutFunctionOrderedWithRetry
>   Time elapsed: 100.009 s  <<< ERROR!
> May 04 13:52:18 org.junit.runners.model.TestTimedOutException: test timed out 
> after 100 seconds
> May 04 13:52:18   at java.lang.Thread.sleep(Native Method)
> May 04 13:52:18   at 
> org.apache.flink.streaming.api.operators.async.AsyncWaitOperatorTest.testProcessingTimeAlwaysTimeoutFunctionWithRetry(AsyncWaitOperatorTest.java:1313)
> May 04 13:52:18   at 
> org.apache.flink.streaming.api.operators.async.AsyncWaitOperatorTest.testProcessingTimeWithTimeoutFunctionOrderedWithRetry(AsyncWaitOperatorTest.java:1277)
> May 04 13:52:18   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> May 04 13:52:18   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 04 13:52:18   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 04 13:52:18   at java.lang.reflect.Method.invoke(Method.java:498)
> May 04 13:52:18   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> May 04 13:52:18   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> May 04 13:52:18   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> May 04 13:52:18   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> May 04 13:52:18   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> May 04 13:52:18   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> May 04 13:52:18   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> May 04 13:52:18   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> May 04 13:52:18   at java.lang.Thread.run(Thread.java:748)
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=48671=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=24c3384f-1bcb-57b3-224f-51bf973bbee8=9288





[jira] [Created] (FLINK-33690) build_wheels_on_macos sometimes fails with read timed out on AZP

2023-11-28 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-33690:
---

 Summary: build_wheels_on_macos sometimes fails with read timed out 
on AZP
 Key: FLINK-33690
 URL: https://issues.apache.org/jira/browse/FLINK-33690
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Affects Versions: 1.18.0
Reporter: Sergey Nuyanzin


This build 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55013=logs=f73b5736-8355-5390-ec71-4dfdec0ce6c5=90f7230e-bf5a-531b-8566-ad48d3e03bbb=217
failed as
{noformat}
...
File 
"/private/var/folders/3s/vfzpb5r51gs6y328rmlgzm7cgn/T/pip-standalone-pip-3jt0u_hl/__env_pip__.zip/pip/_vendor/urllib3/response.py",
 line 573, in stream
  data = self.read(amt=amt, decode_content=decode_content)
File 
"/private/var/folders/3s/vfzpb5r51gs6y328rmlgzm7cgn/T/pip-standalone-pip-3jt0u_hl/__env_pip__.zip/pip/_vendor/urllib3/response.py",
 line 538, in read
  raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
File 
"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/contextlib.py",
 line 131, in __exit__
  self.gen.throw(type, value, traceback)
File 
"/private/var/folders/3s/vfzpb5r51gs6y328rmlgzm7cgn/T/pip-standalone-pip-3jt0u_hl/__env_pip__.zip/pip/_vendor/urllib3/response.py",
 line 440, in _error_catcher
  raise ReadTimeoutError(self._pool, None, "Read timed out.")
  pip._vendor.urllib3.exceptions.ReadTimeoutError: 
HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.

{noformat}
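Transient "Read timed out" failures against files.pythonhosted.org are usually worked around by retrying with backoff (pip itself exposes `--retries` and `--timeout` for this). A generic retry wrapper, sketched here for illustration with a simulated flaky download; the helper and names are not pip internals:

```python
import time

def with_retries(fn, attempts=5, base_delay=0.01, retry_on=(TimeoutError, OSError)):
    """Call fn(), retrying transient network errors with exponential backoff."""
    for i in range(attempts):
        try:
            return fn()
        except retry_on:
            if i == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** i))

# Simulated flaky download: times out twice, then succeeds.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("Read timed out.")
    return b"wheel-bytes"
```

With defaults this absorbs up to four transient timeouts before giving up, which is the same idea as bumping pip's retry/timeout settings in the CI job.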





[jira] [Updated] (FLINK-33641) JUnit5 fails to delete a directory on AZP for various table-planner tests

2023-11-28 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-33641:

Summary: JUnit5 fails to delete a directory on AZP for various 
table-planner tests  (was: RankITCase.testMultipleUnaryTopNAfterAgg fails on 
AZP)

> JUnit5 fails to delete a directory on AZP for various table-planner tests
> -
>
> Key: FLINK-33641
> URL: https://issues.apache.org/jira/browse/FLINK-33641
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Priority: Critical
>  Labels: test-stability
>
> this build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=54856=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=11289
> fails with 
> {noformat}
> Nov 24 02:21:53   Suppressed: java.nio.file.DirectoryNotEmptyException: 
> /tmp/junit1727687356898183357/junit4798298549994985259/1ac07a5866d81240870d5a2982531508
> Nov 24 02:21:53   at 
> sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:242)
> Nov 24 02:21:53   at 
> sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
> Nov 24 02:21:53   at java.nio.file.Files.delete(Files.java:1126)
> Nov 24 02:21:53   at 
> org.junit.jupiter.engine.extension.TempDirectory$CloseablePath$1.deleteAndContinue(TempDirectory.java:293)
> Nov 24 02:21:53   at 
> org.junit.jupiter.engine.extension.TempDirectory$CloseablePath$1.postVisitDirectory(TempDirectory.java:288)
> Nov 24 02:21:53   at 
> org.junit.jupiter.engine.extension.TempDirectory$CloseablePath$1.postVisitDirectory(TempDirectory.java:264)
> Nov 24 02:21:53   at 
> java.nio.file.Files.walkFileTree(Files.java:2688)
> Nov 24 02:21:53   at 
> java.nio.file.Files.walkFileTree(Files.java:2742)
> Nov 24 02:21:53   at 
> org.junit.jupiter.engine.extension.TempDirectory$CloseablePath.deleteAllFilesAndDirectories(TempDirectory.java:264)
> Nov 24 02:21:53   at 
> org.junit.jupiter.engine.extension.TempDirectory$CloseablePath.close(TempDirectory.java:249)
> Nov 24 02:21:53   ... 92 more
> {noformat}
> Not sure, however; this might be related to the recent JUnit4 => JUnit5 upgrade
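The `DirectoryNotEmptyException` during `TempDirectory` cleanup suggests a race: files are still being created or removed under the temp directory while the JUnit extension walks the tree. A common mitigation is to sweep-and-retry the deletion. A minimal sketch of that pattern (illustrative only; not JUnit's or Flink's actual cleanup code):

```python
import os
import shutil
import tempfile
import time

def delete_with_retry(path, attempts=3, pause=0.05):
    """Best-effort recursive delete that retries, tolerating files that
    appear or vanish concurrently during the sweep."""
    for i in range(attempts):
        shutil.rmtree(path, ignore_errors=True)  # ignore mid-walk races
        if not os.path.exists(path):
            return True
        time.sleep(pause * (i + 1))  # give concurrent writers a moment to finish
    return not os.path.exists(path)
```

The retry loop only papers over the symptom; the real fix is making sure the test's async writers (e.g. local-state or checkpoint threads) are shut down before the temp directory is torn down.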





Re: [PR] [FLINK-33689][table-runtime] Fix JsonObjectAggFunction can't retract records when enabling LocalGlobal. [flink]

2023-11-28 Thread via GitHub


flinkbot commented on PR #23827:
URL: https://github.com/apache/flink/pull/23827#issuecomment-1831347603

   
   ## CI report:

   * 14941540a4d1c8b4e6b278bc0f30dd14f3d27515 UNKNOWN

   ## Bot commands

   The @flinkbot bot supports the following commands:

   - `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33689) JsonObjectAggFunction can't retract previous data which is invalid when enabling local global agg

2023-11-28 Thread Shuai Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuai Xu updated FLINK-33689:
-
Summary: JsonObjectAggFunction can't retract previous data which is invalid 
when enabling local global agg  (was: JsonObjectAggFunction can't retract 
previous data which is invalid when enable local global agg)

> JsonObjectAggFunction can't retract previous data which is invalid when 
> enabling local global agg
> -
>
> Key: FLINK-33689
> URL: https://issues.apache.org/jira/browse/FLINK-33689
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.18.0
>Reporter: Shuai Xu
>Priority: Major
>  Labels: pull-request-available
>
> Run the following test in sql/AggregateITCase with LocalGlobal enabled.
> {code:java}
> def testGroupJsonObjectAggWithRetract(): Unit = {
>   val data = new mutable.MutableList[(Long, String, Long)]
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   val sql =
> s"""
>|select
>|   JSON_OBJECTAGG(key k value v)
>|from (select
>| cast(SUM(a) as string) as k,t as v
>|   from
>| Table6
>|   group by t)
>|""".stripMargin
>   val t = failingDataSource(data).toTable(tEnv, 'a, 'c, 't)
>   tEnv.createTemporaryView("Table6", t)
>   val sink = new TestingRetractSink
>   tEnv.sqlQuery(sql).toRetractStream[Row].addSink(sink).setParallelism(1)
>   env.execute()
>   val expected =
> List(
>   "{\"30\":2}"
> )
>   assertThat(sink.getRetractResults).isEqualTo(expected)
> } {code}
> The result is as follows.
> {code:java}
> List({"14":2,"30":2}) {code}
> However, \{"14":2} should be retracted.
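The expected behavior can be pictured as a changelog stream: when the local-global plan re-emits the aggregate under the new key ("30", the final SUM), it must first retract the entry previously emitted under the stale key ("14", a partial SUM). A toy model of a retract sink (illustrative only; not the Flink runtime):

```python
def materialize(changelog):
    """Replay (+/-, key, value) changelog events into the final map,
    the way a retract sink materializes JSON_OBJECTAGG output."""
    state = {}
    for op, key, value in changelog:
        if op == "+":
            state[key] = value
        else:  # "-": retract a previously emitted entry
            state.pop(key, None)
    return state

# Correct stream: the stale key "14" is retracted before "30" is added.
correct = [("+", "14", 2), ("-", "14", 2), ("+", "30", 2)]
# Buggy stream (no retraction): both keys survive, as in List({"14":2,"30":2}).
buggy = [("+", "14", 2), ("+", "30", 2)]
```

The buggy stream is exactly what the test observes: without the retraction message, the sink keeps the intermediate key alongside the final one.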





[jira] [Updated] (FLINK-33689) JsonObjectAggFunction can't retract previous data which is invalid when enable local global agg

2023-11-28 Thread Shuai Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuai Xu updated FLINK-33689:
-
Summary: JsonObjectAggFunction can't retract previous data which is invalid 
when enable local global agg  (was: jsonObjectAggFunction can't retract 
previous data which is invalid when enable local global agg)

> JsonObjectAggFunction can't retract previous data which is invalid when 
> enable local global agg
> ---
>
> Key: FLINK-33689
> URL: https://issues.apache.org/jira/browse/FLINK-33689
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.18.0
>Reporter: Shuai Xu
>Priority: Major
>  Labels: pull-request-available
>
> Run the following test in sql/AggregateITCase with LocalGlobal enabled.
> {code:java}
> def testGroupJsonObjectAggWithRetract(): Unit = {
>   val data = new mutable.MutableList[(Long, String, Long)]
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   val sql =
> s"""
>|select
>|   JSON_OBJECTAGG(key k value v)
>|from (select
>| cast(SUM(a) as string) as k,t as v
>|   from
>| Table6
>|   group by t)
>|""".stripMargin
>   val t = failingDataSource(data).toTable(tEnv, 'a, 'c, 't)
>   tEnv.createTemporaryView("Table6", t)
>   val sink = new TestingRetractSink
>   tEnv.sqlQuery(sql).toRetractStream[Row].addSink(sink).setParallelism(1)
>   env.execute()
>   val expected =
> List(
>   "{\"30\":2}"
> )
>   assertThat(sink.getRetractResults).isEqualTo(expected)
> } {code}
> The result is as follows.
> {code:java}
> List({"14":2,"30":2}) {code}
> However, \{"14":2} should be retracted.





[jira] [Updated] (FLINK-33689) jsonObjectAggFunction can't retract previous data which is invalid when enable local global agg

2023-11-28 Thread Shuai Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuai Xu updated FLINK-33689:
-
Description: 
Run the following test in sql/AggregateITCase with LocalGlobal and minibatch 
enabled.
{code:java}
def testGroupJsonObjectAggWithRetract(): Unit = {
  val data = new mutable.MutableList[(Long, String, Long)]
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  val sql =
s"""
   |select
   |   JSON_OBJECTAGG(key k value v)
   |from (select
   | cast(SUM(a) as string) as k,t as v
   |   from
   | Table6
   |   group by t)
   |""".stripMargin
  val t = failingDataSource(data).toTable(tEnv, 'a, 'c, 't)
  tEnv.createTemporaryView("Table6", t)
  val sink = new TestingRetractSink
  tEnv.sqlQuery(sql).toRetractStream[Row].addSink(sink).setParallelism(1)
  env.execute()
  val expected =
List(
  "{\"30\":2}"
)
  assertThat(sink.getRetractResults).isEqualTo(expected)
} {code}
The result is as follows.
{code:java}
List({"14":2,"30":2}) {code}
However, \{"14":2} should be retracted.

  was:
Run the test as following and enable LocalGlobal and minibatch  in 
sql/AggregateITCase . 
{code:java}
// code placeholder
def testGroupJsonObjectAggWithRetract(): Unit = {
  val data = new mutable.MutableList[(Long, String, Long)]
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  val sql =
s"""
   |select
   |   JSON_OBJECTAGG(key k value v)
   |from (select
   | cast(SUM(a) as string) as k,t as v
   |   from
   | Table6
   |   group by t)
   |""".stripMargin
  val t = failingDataSource(data).toTable(tEnv, 'a, 'c, 't)
  tEnv.createTemporaryView("Table6", t)
  val sink = new TestingRetractSink
  tEnv.sqlQuery(sql).toRetractStream[Row].addSink(sink).setParallelism(1)
  env.execute()
  val expected =
List(
  "{\"30\":2}"
)
  assertThat(sink.getRetractResults).isEqualTo(expected)
} {code}
The result is as following.
{code:java}
// code placeholder
List({"14":2,"30":2}) {code}
However,  \{"14":2}  should be retracted.


> jsonObjectAggFunction can't retract previous data which is invalid when 
> enable local global agg
> ---
>
> Key: FLINK-33689
> URL: https://issues.apache.org/jira/browse/FLINK-33689
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.18.0
>Reporter: Shuai Xu
>Priority: Major
>  Labels: pull-request-available
>
> Run the following test in sql/AggregateITCase with LocalGlobal and minibatch 
> enabled.
> {code:java}
> def testGroupJsonObjectAggWithRetract(): Unit = {
>   val data = new mutable.MutableList[(Long, String, Long)]
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   val sql =
> s"""
>|select
>|   JSON_OBJECTAGG(key k value v)
>|from (select
>| cast(SUM(a) as string) as k,t as v
>|   from
>| Table6
>|   group by t)
>|""".stripMargin
>   val t = failingDataSource(data).toTable(tEnv, 'a, 'c, 't)
>   tEnv.createTemporaryView("Table6", t)
>   val sink = new TestingRetractSink
>   tEnv.sqlQuery(sql).toRetractStream[Row].addSink(sink).setParallelism(1)
>   env.execute()
>   val expected =
> List(
>   "{\"30\":2}"
> )
>   assertThat(sink.getRetractResults).isEqualTo(expected)
> } {code}
> The result is as follows.
> {code:java}
> List({"14":2,"30":2}) {code}
> However, \{"14":2} should be retracted.




[jira] [Updated] (FLINK-33689) jsonObjectAggFunction can't retract previous data which is invalid when enable local global agg

2023-11-28 Thread Shuai Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuai Xu updated FLINK-33689:
-
Description: 
Run the following test in sql/AggregateITCase with LocalGlobal enabled.
{code:java}
def testGroupJsonObjectAggWithRetract(): Unit = {
  val data = new mutable.MutableList[(Long, String, Long)]
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  val sql =
s"""
   |select
   |   JSON_OBJECTAGG(key k value v)
   |from (select
   | cast(SUM(a) as string) as k,t as v
   |   from
   | Table6
   |   group by t)
   |""".stripMargin
  val t = failingDataSource(data).toTable(tEnv, 'a, 'c, 't)
  tEnv.createTemporaryView("Table6", t)
  val sink = new TestingRetractSink
  tEnv.sqlQuery(sql).toRetractStream[Row].addSink(sink).setParallelism(1)
  env.execute()
  val expected =
List(
  "{\"30\":2}"
)
  assertThat(sink.getRetractResults).isEqualTo(expected)
} {code}
The result is as follows.
{code:java}
List({"14":2,"30":2}) {code}
However, \{"14":2} should be retracted.

  was:
Run the test as following and enable LocalGlobal and minibatch  in 
sql/AggregateITCase . 
{code:java}
def testGroupJsonObjectAggWithRetract(): Unit = {
  val data = new mutable.MutableList[(Long, String, Long)]
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  val sql =
s"""
   |select
   |   JSON_OBJECTAGG(key k value v)
   |from (select
   | cast(SUM(a) as string) as k,t as v
   |   from
   | Table6
   |   group by t)
   |""".stripMargin
  val t = failingDataSource(data).toTable(tEnv, 'a, 'c, 't)
  tEnv.createTemporaryView("Table6", t)
  val sink = new TestingRetractSink
  tEnv.sqlQuery(sql).toRetractStream[Row].addSink(sink).setParallelism(1)
  env.execute()
  val expected =
List(
  "{\"30\":2}"
)
  assertThat(sink.getRetractResults).isEqualTo(expected)
} {code}
The result is as following.
{code:java}
List({"14":2,"30":2}) {code}
However,  \{"14":2}  should be retracted.


> jsonObjectAggFunction can't retract previous data which is invalid when 
> enable local global agg
> ---
>
> Key: FLINK-33689
> URL: https://issues.apache.org/jira/browse/FLINK-33689
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.18.0
>Reporter: Shuai Xu
>Priority: Major
>  Labels: pull-request-available
>
> Run the following test in sql/AggregateITCase with LocalGlobal enabled.
> {code:java}
> def testGroupJsonObjectAggWithRetract(): Unit = {
>   val data = new mutable.MutableList[(Long, String, Long)]
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   val sql =
> s"""
>|select
>|   JSON_OBJECTAGG(key k value v)
>|from (select
>| cast(SUM(a) as string) as k,t as v
>|   from
>| Table6
>|   group by t)
>|""".stripMargin
>   val t = failingDataSource(data).toTable(tEnv, 'a, 'c, 't)
>   tEnv.createTemporaryView("Table6", t)
>   val sink = new TestingRetractSink
>   tEnv.sqlQuery(sql).toRetractStream[Row].addSink(sink).setParallelism(1)
>   env.execute()
>   val expected =
> List(
>   "{\"30\":2}"
> )
>   assertThat(sink.getRetractResults).isEqualTo(expected)
> } {code}
> The result is as follows.
> {code:java}
> List({"14":2,"30":2}) {code}
> However, \{"14":2} should be retracted.





[jira] [Commented] (FLINK-33641) RankITCase.testMultipleUnaryTopNAfterAgg fails on AZP

2023-11-28 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17790938#comment-17790938
 ] 

Sergey Nuyanzin commented on FLINK-33641:
-

Same case with another test: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55012=logs=de826397-1924-5900-0034-51895f69d4b7=f311e913-93a2-5a37-acab-4a63e1328f94

> RankITCase.testMultipleUnaryTopNAfterAgg fails on AZP
> -
>
> Key: FLINK-33641
> URL: https://issues.apache.org/jira/browse/FLINK-33641
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Priority: Critical
>  Labels: test-stability
>
> this build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=54856=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=11289
> fails with 
> {noformat}
> Nov 24 02:21:53   Suppressed: java.nio.file.DirectoryNotEmptyException: 
> /tmp/junit1727687356898183357/junit4798298549994985259/1ac07a5866d81240870d5a2982531508
> Nov 24 02:21:53   at 
> sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:242)
> Nov 24 02:21:53   at 
> sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
> Nov 24 02:21:53   at java.nio.file.Files.delete(Files.java:1126)
> Nov 24 02:21:53   at 
> org.junit.jupiter.engine.extension.TempDirectory$CloseablePath$1.deleteAndContinue(TempDirectory.java:293)
> Nov 24 02:21:53   at 
> org.junit.jupiter.engine.extension.TempDirectory$CloseablePath$1.postVisitDirectory(TempDirectory.java:288)
> Nov 24 02:21:53   at 
> org.junit.jupiter.engine.extension.TempDirectory$CloseablePath$1.postVisitDirectory(TempDirectory.java:264)
> Nov 24 02:21:53   at 
> java.nio.file.Files.walkFileTree(Files.java:2688)
> Nov 24 02:21:53   at 
> java.nio.file.Files.walkFileTree(Files.java:2742)
> Nov 24 02:21:53   at 
> org.junit.jupiter.engine.extension.TempDirectory$CloseablePath.deleteAllFilesAndDirectories(TempDirectory.java:264)
> Nov 24 02:21:53   at 
> org.junit.jupiter.engine.extension.TempDirectory$CloseablePath.close(TempDirectory.java:249)
> Nov 24 02:21:53   ... 92 more
> {noformat}
> Not sure, however; this might be related to the recent JUnit4 => JUnit5 upgrade





[jira] [Updated] (FLINK-33668) Decoupling Shuffle network memory and job topology

2023-11-28 Thread Jiang Xin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiang Xin updated FLINK-33668:
--
Description: 
With FLINK-30469  and FLINK-31643, we have decoupled the shuffle network memory 
and the parallelism of tasks by limiting the number of buffers for each 
InputGate and ResultPartition in Hybrid Shuffle. However, when too many shuffle 
tasks are running simultaneously on the same TaskManager, "Insufficient number 
of network buffers" errors would still occur. This usually happens when Slot 
Sharing Group is enabled or a TaskManager contains multiple slots.

We want to make sure that the TaskManager does not encounter "Insufficient 
number of network buffers" even if there are dozens of InputGates and 
ResultPartitions running on the same TaskManager simultaneously. I have given 
this some thought, and here is my rough proposal.

1. InputGates and ResultPartitions only request buffers from the 
LocalBufferPool, which means an InputGate no longer asks for exclusive buffers 
from the NetworkBufferPool directly.
2. When creating a LocalBufferPool, we specify the maximum, minimum, and 
expected number of buffers. Whenever a LocalBufferPool is created or destroyed, 
a redistribution occurs that adjusts the buffer count of every LocalBufferPool, 
using the expected value as a weight and clamping the result between the 
minimum and maximum. According to our tests, the minimum can be set to 4 to 
keep a Flink job working, possibly at lower performance. With this minimum, a 
task with 20 shuffle edges needs only 5 MB of memory to avoid the "insufficient 
network buffers" error.
3. During runtime, InputGate and ResultPartition both derive the number of 
buffers used by their internal data structures, such as the exclusive buffer 
queue of the InputGate and the BufferAccumulator of the ResultPartition, from 
the pool size of their corresponding LocalBufferPool.

  was:
With FLINK-30469  and FLINK-31643, we have decoupled the shuffle network memory 
and the parallelism of tasks by limiting the number of buffers for each 
InputGate and ResultPartition in Hybrid Shuffle. However, when too many shuffle 
tasks are running simultaneously on the same TaskManager, "Insufficient number 
of network buffers" errors would still occur. This usually happens when Slot 
Sharing Group is enabled or a TaskManager contains multiple slots.

We want to make sure that the TaskManager does not encounter "Insufficient 
number of network buffers" even if there are dozens of InputGates and 
ResultPartitions running on the same TaskManager simultaneously. I have given 
this some thought, and here is my rough proposal.

1. InputGate or ResultPartition only apply for buffers from LocalBufferPool, 
which means that InputGate will no longer ask for exclusive buffers from 
NetworkBufferPool directly.
2. When creating a LocalBufferPool, we need to specify the maximum, minimum, 
and expected number of buffers. Whenever a new LBP is created or destroyed, a 
redistribution will occur, to adjust the buffer count of all LocalBufferPools 
using the expected value as a weight and between the minimum and maximum 
values. According to the test, the minimum value can be set to 4 to make the 
Flink Job work despite the possibility of lower performance. With this minimum 
value, a task with 20 shuffle edge needs only 5MB memory to avoid insufficient 
network buffers error.
3. During runtime, InputGate and ResultPartition both calculate the number of 
buffers used by their internal data structures based on the pool size of their 
corresponding LocalBufferPool, such as exclusive buffer queue of InputGate and 
BufferAccumulator of ResultPartition.


> Decoupling Shuffle network memory and job topology
> --
>
> Key: FLINK-33668
> URL: https://issues.apache.org/jira/browse/FLINK-33668
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Reporter: Jiang Xin
>Priority: Major
> Fix For: 1.19.0
>
>
> With FLINK-30469  and FLINK-31643, we have decoupled the shuffle network 
> memory and the parallelism of tasks by limiting the number of buffers for 
> each InputGate and ResultPartition in Hybrid Shuffle. However, when too many 
> shuffle tasks are running simultaneously on the same TaskManager, 
> "Insufficient number of network buffers" errors would still occur. This 
> usually happens when Slot Sharing Group is enabled or a TaskManager contains 
> multiple slots.
> We want to make sure that the TaskManager does not encounter "Insufficient 
> number of network buffers" even if there are dozens of InputGates and 
> ResultPartitions running on the same TaskManager simultaneously. I have given 
> this some thought, and here is my rough proposal.
> 1. InputGate or ResultPartition only apply for buffers from 

[jira] [Commented] (FLINK-30719) flink-runtime-web failed due to a corrupted nodejs dependency

2023-11-28 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17790937#comment-17790937
 ] 

Sergey Nuyanzin commented on FLINK-30719:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=54999=logs=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5=54421a62-0c80-5aad-3319-094ff69180bb=9645

> flink-runtime-web failed due to a corrupted nodejs dependency
> -
>
> Key: FLINK-30719
> URL: https://issues.apache.org/jira/browse/FLINK-30719
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend, Test Infrastructure, Tests
>Affects Versions: 1.16.0, 1.17.0, 1.18.0
>Reporter: Matthias Pohl
>Assignee: Sergey Nuyanzin
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=44954=logs=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5=54421a62-0c80-5aad-3319-094ff69180bb=12550
> The build failed due to a corrupted nodejs dependency:
> {code}
> [ERROR] The archive file 
> /__w/1/.m2/repository/com/github/eirslett/node/16.13.2/node-16.13.2-linux-x64.tar.gz
>  is corrupted and will be deleted. Please try the build again.
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33689) jsonObjectAggFunction can't retract previous data which is invalid when enable local global agg

2023-11-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33689:
---
Labels: pull-request-available  (was: )

> jsonObjectAggFunction can't retract previous data which is invalid when 
> enable local global agg
> ---
>
> Key: FLINK-33689
> URL: https://issues.apache.org/jira/browse/FLINK-33689
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.18.0
>Reporter: Shuai Xu
>Priority: Major
>  Labels: pull-request-available
>
> Run the following test in sql/AggregateITCase with LocalGlobal and minibatch 
> enabled.
> {code:java}
> // code placeholder
> def testGroupJsonObjectAggWithRetract(): Unit = {
>   val data = new mutable.MutableList[(Long, String, Long)]
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   data.+=((2L, "Hallo", 2L))
>   val sql =
> s"""
>|select
>|   JSON_OBJECTAGG(key k value v)
>|from (select
>| cast(SUM(a) as string) as k,t as v
>|   from
>| Table6
>|   group by t)
>|""".stripMargin
>   val t = failingDataSource(data).toTable(tEnv, 'a, 'c, 't)
>   tEnv.createTemporaryView("Table6", t)
>   val sink = new TestingRetractSink
>   tEnv.sqlQuery(sql).toRetractStream[Row].addSink(sink).setParallelism(1)
>   env.execute()
>   val expected =
> List(
>   "{\"30\":2}"
> )
>   assertThat(sink.getRetractResults).isEqualTo(expected)
> } {code}
> The result is as follows.
> {code:java}
> // code placeholder
> List({"14":2,"30":2}) {code}
> However,  \{"14":2}  should be retracted.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-33689][table-runtime] Fix JsonObjectAggFunction can't retract records when enabling LocalGlobal. [flink]

2023-11-28 Thread via GitHub


xishuaidelin opened a new pull request, #23827:
URL: https://github.com/apache/flink/pull/23827

   
   
   
   
   ## What is the purpose of the change
   
   * This pull request fixes JsonObjectAggFunction, which can't retract records 
when LocalGlobal is enabled. The retract function in JsonObjectAggFunction 
currently deletes the key directly from the recorded map. When LocalGlobal is 
enabled, the content recorded in the previous stage has already been committed 
to the global stage. Therefore, when content from the previous stage needs to 
be retracted, the existing accumulator's map does not store the corresponding 
content, which leads to a retraction failure.*
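The failure mode and the fix described above can be sketched with a toy accumulator (hypothetical names; this is not Flink's actual JsonObjectAggFunction): deleting directly from the map silently drops a retraction for a key the accumulator has not seen yet, whereas recording pending retractions lets a later accumulate or merge honor it.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of the bug: a retraction arriving for a key the accumulator does
// not hold must be remembered, or a later accumulation of that key survives
// as a stale entry. Hypothetical names, not Flink's JsonObjectAggFunction.
public class RetractSketch {
    static class Acc {
        final Map<String, Integer> map = new HashMap<>();
        final Set<String> retracted = new HashSet<>(); // fix: pending retractions

        void accumulate(String k, int v) {
            // a remembered retraction cancels the incoming record
            if (!retracted.remove(k)) {
                map.put(k, v);
            }
        }

        // Buggy variant: silently ignores keys that are not in the map.
        void retractBuggy(String k) {
            map.remove(k);
        }

        // Fixed variant: records the retraction so a later accumulate/merge honors it.
        void retractFixed(String k) {
            if (map.remove(k) == null) {
                retracted.add(k);
            }
        }
    }

    public static void main(String[] args) {
        Acc buggy = new Acc();
        buggy.retractBuggy("14");      // retraction arrives without the key: lost
        buggy.accumulate("14", 2);
        System.out.println(buggy.map); // {14=2} -- stale entry survives

        Acc fixed = new Acc();
        fixed.retractFixed("14");      // retraction is remembered
        fixed.accumulate("14", 2);
        System.out.println(fixed.map); // {} -- stale entry is cancelled
    }
}
```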
   
   
   ## Brief change log
   
 - Modify the accumulator and functions in JsonObjectAggFunction to retract 
the content that needs to be retracted.
 - Add tests.
   
   
   ## Verifying this change
   
   Some tests are added to verify this PR.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (no)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-33664) Setup cron build for java 21

2023-11-28 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-33664.
-
Resolution: Fixed

> Setup cron build for java 21
> 
>
> Key: FLINK-33664
> URL: https://issues.apache.org/jira/browse/FLINK-33664
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System / CI
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33689) jsonObjectAggFunction can't retract previous data which is invalid when enable local global agg

2023-11-28 Thread Shuai Xu (Jira)
Shuai Xu created FLINK-33689:


 Summary: jsonObjectAggFunction can't retract previous data which 
is invalid when enable local global agg
 Key: FLINK-33689
 URL: https://issues.apache.org/jira/browse/FLINK-33689
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.18.0
Reporter: Shuai Xu


Run the following test in sql/AggregateITCase with LocalGlobal and minibatch 
enabled.
{code:java}
// code placeholder
def testGroupJsonObjectAggWithRetract(): Unit = {
  val data = new mutable.MutableList[(Long, String, Long)]
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  data.+=((2L, "Hallo", 2L))
  val sql =
s"""
   |select
   |   JSON_OBJECTAGG(key k value v)
   |from (select
   | cast(SUM(a) as string) as k,t as v
   |   from
   | Table6
   |   group by t)
   |""".stripMargin
  val t = failingDataSource(data).toTable(tEnv, 'a, 'c, 't)
  tEnv.createTemporaryView("Table6", t)
  val sink = new TestingRetractSink
  tEnv.sqlQuery(sql).toRetractStream[Row].addSink(sink).setParallelism(1)
  env.execute()
  val expected =
List(
  "{\"30\":2}"
)
  assertThat(sink.getRetractResults).isEqualTo(expected)
} {code}
The result is as follows.
{code:java}
// code placeholder
List({"14":2,"30":2}) {code}
However,  \{"14":2}  should be retracted.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-33664) Setup cron build for java 21

2023-11-28 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-33664.
---

> Setup cron build for java 21
> 
>
> Key: FLINK-33664
> URL: https://issues.apache.org/jira/browse/FLINK-33664
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System / CI
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33664) Setup cron build for java 21

2023-11-28 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17790934#comment-17790934
 ] 

Sergey Nuyanzin commented on FLINK-33664:
-

Merged to master as 
[76f754833298beb3e7589d521e75325adf283bf3|https://github.com/apache/flink/commit/76f754833298beb3e7589d521e75325adf283bf3]

> Setup cron build for java 21
> 
>
> Key: FLINK-33664
> URL: https://issues.apache.org/jira/browse/FLINK-33664
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System / CI
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33668) Decoupling Shuffle network memory and job topology

2023-11-28 Thread Jiang Xin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiang Xin updated FLINK-33668:
--
Description: 
With FLINK-30469  and FLINK-31643, we have decoupled the shuffle network memory 
and the parallelism of tasks by limiting the number of buffers for each 
InputGate and ResultPartition in Hybrid Shuffle. However, when too many shuffle 
tasks are running simultaneously on the same TaskManager, "Insufficient number 
of network buffers" errors would still occur. This usually happens when Slot 
Sharing Group is enabled or a TaskManager contains multiple slots.

We want to make sure that the TaskManager does not encounter "Insufficient 
number of network buffers" even if there are dozens of InputGates and 
ResultPartitions running on the same TaskManager simultaneously. I have given 
this some thought, and here is my rough proposal.

1. InputGate or ResultPartition only apply for buffers from LocalBufferPool, 
which means that InputGate will no longer ask for exclusive buffers from 
NetworkBufferPool directly.
2. When creating a LocalBufferPool, we need to specify the maximum, minimum, 
and expected number of buffers. Whenever a new LBP is created or destroyed, a 
redistribution will occur, to adjust the buffer count of all LocalBufferPools 
using the expected value as a weight and between the minimum and maximum 
values. According to the test, the minimum value can be set to 4 to make the 
Flink Job work despite the possibility of lower performance. With this minimum 
value, a task with 20 shuffle edge needs only 5MB memory to avoid insufficient 
network buffers error.
3. During runtime, InputGate and ResultPartition both calculate the number of 
buffers used by their internal data structures based on the pool size of their 
corresponding LocalBufferPool, such as exclusive buffer queue of InputGate and 
BufferAccumulator of ResultPartition.

  was:
With FLINK-30469  and FLINK-31643, we have decoupled the shuffle network memory 
and the parallelism of tasks by limiting the number of buffers for each 
InputGate and ResultPartition in Hybrid Shuffle. However, when too many shuffle 
tasks are running simultaneously on the same TaskManager, "Insufficient number 
of network buffers" errors would still occur. This usually happens when Slot 
Sharing Group is enabled or a TaskManager contains multiple slots.

We want to make sure that the TaskManager does not encounter "Insufficient 
number of network buffers" even if there are dozens of InputGates and 
ResultPartitions running on the same TaskManager simultaneously.


> Decoupling Shuffle network memory and job topology
> --
>
> Key: FLINK-33668
> URL: https://issues.apache.org/jira/browse/FLINK-33668
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Reporter: Jiang Xin
>Priority: Major
> Fix For: 1.19.0
>
>
> With FLINK-30469  and FLINK-31643, we have decoupled the shuffle network 
> memory and the parallelism of tasks by limiting the number of buffers for 
> each InputGate and ResultPartition in Hybrid Shuffle. However, when too many 
> shuffle tasks are running simultaneously on the same TaskManager, 
> "Insufficient number of network buffers" errors would still occur. This 
> usually happens when Slot Sharing Group is enabled or a TaskManager contains 
> multiple slots.
> We want to make sure that the TaskManager does not encounter "Insufficient 
> number of network buffers" even if there are dozens of InputGates and 
> ResultPartitions running on the same TaskManager simultaneously. I have given 
> this some thought, and here is my rough proposal.
> 1. InputGate or ResultPartition only apply for buffers from LocalBufferPool, 
> which means that InputGate will no longer ask for exclusive buffers from 
> NetworkBufferPool directly.
> 2. When creating a LocalBufferPool, we need to specify the maximum, minimum, 
> and expected number of buffers. Whenever a new LBP is created or destroyed, a 
> redistribution will occur, to adjust the buffer count of all LocalBufferPools 
> using the expected value as a weight and between the minimum and maximum 
> values. According to the test, the minimum value can be set to 4 to make the 
> Flink Job work despite the possibility of lower performance. With this 
> minimum value, a task with 20 shuffle edge needs only 5MB memory to avoid 
> insufficient network buffers error.
> 3. During runtime, InputGate and ResultPartition both calculate the number of 
> buffers used by their internal data structures based on the pool size of 
> their corresponding LocalBufferPool, such as exclusive buffer queue of 
> InputGate and BufferAccumulator of ResultPartition.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32756) Reuse ClientHighAvailabilityServices when creating RestClusterClient

2023-11-28 Thread xiangyu feng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiangyu feng updated FLINK-32756:
-
Parent: FLINK-33683
Issue Type: Sub-task  (was: Improvement)

> Reuse ClientHighAvailabilityServices when creating RestClusterClient
> 
>
> Key: FLINK-32756
> URL: https://issues.apache.org/jira/browse/FLINK-32756
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client / Job Submission
>Reporter: xiangyu feng
>Assignee: xiangyu feng
>Priority: Major
>
> Currently, every newly built RestClusterClient will create a new 
> ClientHighAvailabilityServices, which is both unnecessary and 
> resource-consuming. For example, each ZooKeeperClientHAServices contains a 
> ZKClient that holds a connection to the ZK server and several related threads.
> By reusing ClientHighAvailabilityServices across multiple RestClusterClient 
> instances, we can save system resources (threads, connections), 
> connection-establishment time, and leader-retrieval time.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32756) Reuse ClientHighAvailabilityServices when creating RestClusterClient

2023-11-28 Thread xiangyu feng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiangyu feng updated FLINK-32756:
-
Parent: (was: FLINK-25318)
Issue Type: Improvement  (was: Sub-task)

> Reuse ClientHighAvailabilityServices when creating RestClusterClient
> 
>
> Key: FLINK-32756
> URL: https://issues.apache.org/jira/browse/FLINK-32756
> Project: Flink
>  Issue Type: Improvement
>  Components: Client / Job Submission
>Reporter: xiangyu feng
>Assignee: xiangyu feng
>Priority: Major
>
> Currently, every newly built RestClusterClient will create a new 
> ClientHighAvailabilityServices, which is both unnecessary and 
> resource-consuming. For example, each ZooKeeperClientHAServices contains a 
> ZKClient that holds a connection to the ZK server and several related threads.
> By reusing ClientHighAvailabilityServices across multiple RestClusterClient 
> instances, we can save system resources (threads, connections), 
> connection-establishment time, and leader-retrieval time.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33688) Reuse Channels in RestClient to save connection establish time

2023-11-28 Thread xiangyu feng (Jira)
xiangyu feng created FLINK-33688:


 Summary: Reuse Channels in RestClient to save connection establish 
time
 Key: FLINK-33688
 URL: https://issues.apache.org/jira/browse/FLINK-33688
 Project: Flink
  Issue Type: Sub-task
  Components: Client / Job Submission
Reporter: xiangyu feng


RestClient can reuse connections to the Dispatcher when submitting HTTP 
requests to a long-running Flink cluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33687) Reuse RestClusterClient in StandaloneClusterDescriptor to avoid frequent thread create/destroy

2023-11-28 Thread xiangyu feng (Jira)
xiangyu feng created FLINK-33687:


 Summary: Reuse RestClusterClient in StandaloneClusterDescriptor to 
avoid frequent thread create/destroy
 Key: FLINK-33687
 URL: https://issues.apache.org/jira/browse/FLINK-33687
 Project: Flink
  Issue Type: Sub-task
  Components: Client / Job Submission
Reporter: xiangyu feng


`RestClusterClient` can also be reused when submitting programs to a 
long-running Flink cluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] FLINK-33600][table] print the query time cost for batch query in cli [flink]

2023-11-28 Thread via GitHub


JingGe commented on PR #23809:
URL: https://github.com/apache/flink/pull/23809#issuecomment-1831318436

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-26013) Develop ArchUnit test (infra) for Flink core and runtime test code

2023-11-28 Thread Jing Ge (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17790930#comment-17790930
 ] 

Jing Ge commented on FLINK-26013:
-

flink-core:

master: 05960a7ccf2a6900e87159a6925595b9df6289f2

> Develop ArchUnit test (infra) for Flink core and runtime test code
> --
>
> Key: FLINK-26013
> URL: https://issues.apache.org/jira/browse/FLINK-26013
> Project: Flink
>  Issue Type: Improvement
>  Components: Test Infrastructure
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Minor
>  Labels: pull-request-available
>
> ArchUnit test (infra) should be developed for Flink core and runtime related 
> submodules after the ArchUnit infra for test code has been built.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33686) Reuse StandaloneClusterDescriptor in RemoteExecutor when executing jobs on the same cluster

2023-11-28 Thread xiangyu feng (Jira)
xiangyu feng created FLINK-33686:


 Summary: Reuse StandaloneClusterDescriptor in RemoteExecutor when 
executing jobs on the same cluster
 Key: FLINK-33686
 URL: https://issues.apache.org/jira/browse/FLINK-33686
 Project: Flink
  Issue Type: Sub-task
  Components: Client / Job Submission
Reporter: xiangyu feng


Multiple `RemoteExecutor` instances can reuse the same 
`StandaloneClusterDescriptor` when executing jobs on the same running cluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33685) StandaloneClusterId need to distinguish different remote clusters

2023-11-28 Thread xiangyu feng (Jira)
xiangyu feng created FLINK-33685:


 Summary: StandaloneClusterId need to distinguish different remote 
clusters
 Key: FLINK-33685
 URL: https://issues.apache.org/jira/browse/FLINK-33685
 Project: Flink
  Issue Type: Sub-task
  Components: Client / Job Submission
Reporter: xiangyu feng


`StandaloneClusterId` is a singleton, which means `StandaloneClusterDescriptor` 
cannot distinguish different remote running clusters.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33684) Improve the retry strategy in CollectResultFetcher

2023-11-28 Thread xiangyu feng (Jira)
xiangyu feng created FLINK-33684:


 Summary: Improve the retry strategy in CollectResultFetcher
 Key: FLINK-33684
 URL: https://issues.apache.org/jira/browse/FLINK-33684
 Project: Flink
  Issue Type: Sub-task
  Components: API / DataStream
Reporter: xiangyu feng


Currently, CollectResultFetcher uses a fixed retry interval.
{code:java}
private void sleepBeforeRetry() {
    if (retryMillis <= 0) {
        return;
    }

    try {
        // TODO a more proper retry strategy?
        Thread.sleep(retryMillis);
    } catch (InterruptedException e) {
        LOG.warn("Interrupted when sleeping before a retry", e);
    }
}
{code}
This can be improved with a WaitStrategy.
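For example, a pluggable WaitStrategy with capped exponential backoff could replace the fixed interval (an illustrative sketch; the interface name and signature are assumptions, not Flink's actual design):

```java
// Sketch of a pluggable WaitStrategy for CollectResultFetcher-style retries:
// exponential backoff keeps the first retries fast while bounding the load a
// busy cluster sees from polling clients. Hypothetical interface, not Flink's.
public class BackoffSketch {
    interface WaitStrategy {
        long waitTime(int attempt); // millis to sleep before the given retry attempt
    }

    static WaitStrategy exponentialBackoff(long initialMillis, long maxMillis) {
        // cap the shift to avoid overflow for very large attempt counts
        return attempt -> Math.min(maxMillis, initialMillis << Math.min(attempt, 30));
    }

    public static void main(String[] args) {
        WaitStrategy s = exponentialBackoff(10, 1000);
        for (int attempt = 0; attempt < 10; attempt++) {
            System.out.print(s.waitTime(attempt) + " ");
        }
        // prints: 10 20 40 80 160 320 640 1000 1000 1000
    }
}
```

The fetcher would then call `Thread.sleep(strategy.waitTime(attempt))` instead of sleeping for a constant `retryMillis`, resetting `attempt` whenever a fetch succeeds.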



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [hotfix][core][test] fix ArchUnit typo in pom [flink]

2023-11-28 Thread via GitHub


flinkbot commented on PR #23826:
URL: https://github.com/apache/flink/pull/23826#issuecomment-1831307968

   
   ## CI report:
   
   * e01506ef2edf4b73bf86440f77512f50239fa0d5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-33683) Improve the performance of submitting jobs and fetching results to a running flink cluster

2023-11-28 Thread xiangyu feng (Jira)
xiangyu feng created FLINK-33683:


 Summary: Improve the performance of submitting jobs and fetching 
results to a running flink cluster
 Key: FLINK-33683
 URL: https://issues.apache.org/jira/browse/FLINK-33683
 Project: Flink
  Issue Type: Improvement
  Components: Client / Job Submission, Table SQL / Client
Reporter: xiangyu feng


There is currently a lot of unnecessary overhead involved in submitting jobs 
to and fetching results from a long-running Flink cluster. This works well for 
streaming and batch jobs, because in these scenarios users do not frequently 
submit jobs and fetch results from a running cluster.

In OLAP scenarios, however, users continuously submit lots of short-lived jobs 
to the running cluster, and this overhead has a huge impact on end-to-end 
performance. Here are some examples of unnecessary overhead:
 * Each `RemoteExecutor` will create a new `StandaloneClusterDescriptor` when 
executing a job on the same remote cluster
 * `StandaloneClusterDescriptor` will always create a new `RestClusterClient` 
when retrieving an existing Flink Cluster
 * Each `RestClusterClient` will create a new `ClientHighAvailabilityServices`, 
which might contain a resource-consuming HA client (ZKClient or KubeClient) and 
perform a time-consuming leader-retrieval operation
 * `RestClient` will create a new connection for every request, which costs 
extra connection-establishment time

 

Therefore, I suggest creating this ticket and the following subtasks to improve 
performance. This ticket is also related to FLINK-25318.
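The common theme of the subtasks, paying the setup cost once per cluster and reusing the result, can be sketched as a simple per-address cache (hypothetical types; these are not Flink's client classes):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the reuse idea behind these subtasks: cache one client per remote
// cluster address so repeated short-lived submissions skip descriptor/client/HA
// setup. Hypothetical types, not Flink's RestClusterClient hierarchy.
public class ClientCacheSketch {
    static final class Client {
        final String address; // stands in for an expensive client with HA services

        Client(String address) {
            this.address = address;
        }
    }

    private final Map<String, Client> cache = new ConcurrentHashMap<>();

    Client clientFor(String address) {
        // computeIfAbsent: only the first caller per address pays the setup cost
        return cache.computeIfAbsent(address, Client::new);
    }

    public static void main(String[] args) {
        ClientCacheSketch sketch = new ClientCacheSketch();
        // the same instance is returned for repeated submissions to one cluster
        System.out.println(sketch.clientFor("host:8081") == sketch.clientFor("host:8081")); // true
    }
}
```

A real implementation would additionally need reference counting or an idle timeout so cached clients (and their ZK/Kubernetes connections) are eventually closed.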



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] Adding AWS Connectors v4.2.0 [flink-web]

2023-11-28 Thread via GitHub


leonardBang commented on code in PR #693:
URL: https://github.com/apache/flink-web/pull/693#discussion_r1408813637


##
docs/data/release_archive.yml:
##
@@ -522,6 +522,11 @@ release_archive:
   version: 3.0.1
   release_date: 2023-10-30
   filename: "kafka"
+- name: "Flink AWS Connectors"
+  connector: "aws"

Review Comment:
   Should we list all dedicated connector names instead of a special module 
name here?
   
   Just a minor comment, not a blocker.
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [hotfix][core][test] fix ArchUnit typo in pom [flink]

2023-11-28 Thread via GitHub


JingGe opened a new pull request, #23826:
URL: https://github.com/apache/flink/pull/23826

   ## What is the purpose of the change
   
   fix ArchUnit typo in pom of flink-core
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] fix typo in the pom files of flink-format modules for archunit [flink]

2023-11-28 Thread via GitHub


flinkbot commented on PR #23825:
URL: https://github.com/apache/flink/pull/23825#issuecomment-1831303244

   
   ## CI report:
   
   * 1f485ba5993aa6ddcac9bc5f06f1c5cd2bc9a47b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] fix typo in the pom files of flink-format modules for archunit [flink]

2023-11-28 Thread via GitHub


JingGe opened a new pull request, #23825:
URL: https://github.com/apache/flink/pull/23825

   ## What is the purpose of the change
   
   fix typo in the pom files for ArchUnit
   
   ## Verifying this change
   
   This change is a trivial typo fix without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-31481][table] Support enhanced show databases syntax [flink]

2023-11-28 Thread via GitHub


jeyhunkarimov commented on code in PR #23612:
URL: https://github.com/apache/flink/pull/23612#discussion_r1408807116


##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/converters/SqlShowDatabasesConverter.java:
##
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.operations.converters;
+
+import org.apache.flink.sql.parser.dql.SqlShowDatabases;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.catalog.CatalogManager;
+import org.apache.flink.table.operations.Operation;
+import org.apache.flink.table.operations.ShowDatabasesOperation;
+
+/** A converter for {@link SqlShowDatabases}. */
+public class SqlShowDatabasesConverter implements SqlNodeConverter<SqlShowDatabases> {
+
+@Override
+public Operation convertSqlNode(SqlShowDatabases sqlShowDatabases, 
ConvertContext context) {
+if (sqlShowDatabases.getPreposition() == null) {
+return new ShowDatabasesOperation(
+sqlShowDatabases.getLikeType(),
+sqlShowDatabases.getLikeSqlPattern(),
+sqlShowDatabases.isNotLike());
+} else {
+CatalogManager catalogManager = context.getCatalogManager();
+String[] fullCatalogName = sqlShowDatabases.getCatalog();
+if (fullCatalogName.length > 1) {

Review Comment:
   Yes, the `length` check is moved to the constructor of `SqlShowDatabases`, 
and I added a test case to catch the exception in `catalog_database.q`






Re: [PR] [FLINK-31481][table] Support enhanced show databases syntax [flink]

2023-11-28 Thread via GitHub


jeyhunkarimov commented on code in PR #23612:
URL: https://github.com/apache/flink/pull/23612#discussion_r1408795264


##
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/dql/SqlShowDatabases.java:
##
@@ -29,14 +31,55 @@
 import java.util.Collections;
 import java.util.List;
 
-/** SHOW Databases sql call. */
+import static java.util.Objects.requireNonNull;
+
+/**
+ * SHOW Databases sql call. The full syntax for show databases is as follows:
+ *
+ * {@code
+ * SHOW DATABASES [ ( FROM | IN ) catalog_name] [ [NOT] (LIKE | ILIKE)
+ *  ] statement
+ * }
+ */
 public class SqlShowDatabases extends SqlCall {
 
 public static final SqlSpecialOperator OPERATOR =
 new SqlSpecialOperator("SHOW DATABASES", SqlKind.OTHER);
 
-public SqlShowDatabases(SqlParserPos pos) {
+private final String preposition;
+private final SqlIdentifier catalogName;
+private final String likeType;
+private final SqlCharStringLiteral likeLiteral;
+private final boolean notLike;
+
+public String[] getCatalog() {
+return catalogName == null || catalogName.names.isEmpty()
+? new String[] {}
+: catalogName.names.toArray(new String[0]);
+}
+
+public SqlShowDatabases(
+SqlParserPos pos,
+String preposition,
+SqlIdentifier catalogName,
+String likeType,
+SqlCharStringLiteral likeLiteral,
+boolean notLike) {
 super(pos);
+this.preposition = preposition;
+this.catalogName =

Review Comment:
   Yes, good catch






Re: [PR] [FLINK-31481][table] Support enhanced show databases syntax [flink]

2023-11-28 Thread via GitHub


jeyhunkarimov commented on code in PR #23612:
URL: https://github.com/apache/flink/pull/23612#discussion_r1408794421


##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ShowDatabasesOperation.java:
##
@@ -20,26 +20,108 @@
 
 import org.apache.flink.annotation.Internal;
 import org.apache.flink.table.api.internal.TableResultInternal;
+import org.apache.flink.table.functions.SqlLikeUtils;
 
+import java.util.Arrays;
+
+import static java.util.Objects.requireNonNull;
 import static 
org.apache.flink.table.api.internal.TableResultUtils.buildStringArrayResult;
 
 /** Operation to describe a SHOW DATABASES statement. */
 @Internal
 public class ShowDatabasesOperation implements ShowOperation {
 
+private final String preposition;
+private final String catalogName;
+private final LikeType likeType;
+private final String likePattern;
+private final boolean notLike;
+
+public ShowDatabasesOperation() {
+// "SHOW DATABASES" command with all options being default
+this.preposition = null;
+this.catalogName = null;
+this.likeType = null;
+this.likePattern = null;
+this.notLike = false;
+}
+
+public ShowDatabasesOperation(String likeType, String likePattern, boolean 
notLike) {
+this.preposition = null;
+this.catalogName = null;
+if (likeType != null) {
+this.likeType = LikeType.of(likeType);
+this.likePattern = requireNonNull(likePattern, "Like pattern must 
not be null");
+this.notLike = notLike;
+} else {
+this.likeType = null;
+this.likePattern = null;
+this.notLike = false;
+}
+}
+
+public ShowDatabasesOperation(
+String preposition,
+String catalogName,
+String likeType,
+String likePattern,
+boolean notLike) {
+this.preposition = preposition;
+this.catalogName = catalogName;
+if (likeType != null) {
+this.likeType = LikeType.of(likeType);
+this.likePattern = requireNonNull(likePattern, "Like pattern must 
not be null");
+this.notLike = notLike;
+} else {
+this.likeType = null;
+this.likePattern = null;
+this.notLike = false;
+}
+}
+
 @Override
 public String asSummaryString() {
-return "SHOW DATABASES";
+StringBuilder builder = new StringBuilder();
+builder.append("SHOW DATABASES");
+if (preposition != null) {
+builder.append(String.format(" %s %s", preposition, catalogName));
+}
+if (likeType != null) {
+if (notLike) {
+builder.append(String.format(" NOT %s '%s'", likeType.name(), 
likePattern));
+} else {
+builder.append(String.format(" %s '%s'", likeType.name(), 
likePattern));
+}
+}
+return builder.toString();
 }
 
 @Override
 public TableResultInternal execute(Context ctx) {
+String cName =
+catalogName == null ? 
ctx.getCatalogManager().getCurrentCatalog() : catalogName;
 String[] databases =
-ctx.getCatalogManager()
-
.getCatalogOrThrowException(ctx.getCatalogManager().getCurrentCatalog())
-.listDatabases().stream()
+
ctx.getCatalogManager().getCatalogOrThrowException(cName).listDatabases().stream()
 .sorted()
 .toArray(String[]::new);
+
+if (likeType != null) {
+databases =
+Arrays.stream(databases)
+.filter(
+row -> {
+if (likeType == LikeType.ILIKE) {
+return notLike
+!= SqlLikeUtils.ilike(row, 
likePattern, "\\");
+} else {

Review Comment:
   done
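The filtering discussed in this hunk delegates to `SqlLikeUtils.like`/`SqlLikeUtils.ilike`. As a rough illustration only (class and method names below are invented for this sketch; Flink's real implementation also supports an escape character, which is omitted here), SQL LIKE filtering over database names can be sketched with a regex translation:

```java
import java.util.Arrays;
import java.util.regex.Pattern;

// Minimal sketch of SQL LIKE semantics: '%' matches any sequence of
// characters, '_' matches exactly one character; ILIKE is the
// case-insensitive variant. Escape-character handling is omitted.
final class LikeFilterSketch {
    static boolean like(String value, String pattern, boolean caseInsensitive) {
        StringBuilder regex = new StringBuilder();
        for (char c : pattern.toCharArray()) {
            if (c == '%') {
                regex.append(".*");
            } else if (c == '_') {
                regex.append('.');
            } else {
                // Quote everything else so regex metacharacters match literally.
                regex.append(Pattern.quote(String.valueOf(c)));
            }
        }
        int flags = caseInsensitive ? Pattern.CASE_INSENSITIVE : 0;
        return Pattern.compile(regex.toString(), flags).matcher(value).matches();
    }

    // notLike != matched keeps matches for LIKE and non-matches for NOT LIKE.
    static String[] filter(String[] names, String pattern, boolean ilike, boolean notLike) {
        return Arrays.stream(names)
                .filter(name -> notLike != like(name, pattern, ilike))
                .toArray(String[]::new);
    }
}
```

This mirrors the `notLike != SqlLikeUtils.ilike(...)` shape in the diff above, where the XOR-style comparison folds LIKE and NOT LIKE into one predicate.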






Re: [PR] [FLINK-31481][table] Support enhanced show databases syntax [flink]

2023-11-28 Thread via GitHub


jeyhunkarimov commented on code in PR #23612:
URL: https://github.com/apache/flink/pull/23612#discussion_r1408794805


##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ShowDatabasesOperation.java:
##
@@ -20,26 +20,108 @@
 
 import org.apache.flink.annotation.Internal;
 import org.apache.flink.table.api.internal.TableResultInternal;
+import org.apache.flink.table.functions.SqlLikeUtils;
 
+import java.util.Arrays;
+
+import static java.util.Objects.requireNonNull;
 import static 
org.apache.flink.table.api.internal.TableResultUtils.buildStringArrayResult;
 
 /** Operation to describe a SHOW DATABASES statement. */
 @Internal
 public class ShowDatabasesOperation implements ShowOperation {
 
+private final String preposition;
+private final String catalogName;
+private final LikeType likeType;
+private final String likePattern;
+private final boolean notLike;
+
+public ShowDatabasesOperation() {
+// "SHOW DATABASES" command with all options being default
+this.preposition = null;
+this.catalogName = null;
+this.likeType = null;
+this.likePattern = null;
+this.notLike = false;
+}
+
+public ShowDatabasesOperation(String likeType, String likePattern, boolean 
notLike) {
+this.preposition = null;
+this.catalogName = null;
+if (likeType != null) {
+this.likeType = LikeType.of(likeType);
+this.likePattern = requireNonNull(likePattern, "Like pattern must 
not be null");
+this.notLike = notLike;
+} else {
+this.likeType = null;
+this.likePattern = null;
+this.notLike = false;
+}
+}
+
+public ShowDatabasesOperation(
+String preposition,
+String catalogName,
+String likeType,
+String likePattern,
+boolean notLike) {
+this.preposition = preposition;
+this.catalogName = catalogName;
+if (likeType != null) {
+this.likeType = LikeType.of(likeType);
+this.likePattern = requireNonNull(likePattern, "Like pattern must 
not be null");
+this.notLike = notLike;
+} else {
+this.likeType = null;
+this.likePattern = null;
+this.notLike = false;
+}
+}
+
 @Override
 public String asSummaryString() {
-return "SHOW DATABASES";
+StringBuilder builder = new StringBuilder();
+builder.append("SHOW DATABASES");
+if (preposition != null) {
+builder.append(String.format(" %s %s", preposition, catalogName));
+}
+if (likeType != null) {
+if (notLike) {
+builder.append(String.format(" NOT %s '%s'", likeType.name(), 
likePattern));
+} else {
+builder.append(String.format(" %s '%s'", likeType.name(), 
likePattern));
+}
+}
+return builder.toString();
 }
 
 @Override
 public TableResultInternal execute(Context ctx) {
+String cName =
+catalogName == null ? 
ctx.getCatalogManager().getCurrentCatalog() : catalogName;
 String[] databases =
-ctx.getCatalogManager()
-
.getCatalogOrThrowException(ctx.getCatalogManager().getCurrentCatalog())
-.listDatabases().stream()
+
ctx.getCatalogManager().getCatalogOrThrowException(cName).listDatabases().stream()
 .sorted()
 .toArray(String[]::new);
+
+if (likeType != null) {
+databases =
+Arrays.stream(databases)

Review Comment:
   done






[jira] [Resolved] (FLINK-33637) [JUnit5 Migration] Introduce ArchTest rules to ban Junit 4 for table-planner

2023-11-28 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu resolved FLINK-33637.

Fix Version/s: 1.19.0
   Resolution: Implemented

Implemented in master: b9f3b957cc136a65c55b0b93d5e0bab2b412f8f3

> [JUnit5 Migration] Introduce ArchTest rules to ban Junit 4 for table-planner
> 
>
> Key: FLINK-33637
> URL: https://issues.apache.org/jira/browse/FLINK-33637
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Jiabao Sun
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Currently, table-planner has already completed the migration to JUnit 5, and 
> we need to prevent the introduction of JUnit 4 tests in new PRs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33637) [JUnit5 Migration] Introduce ArchTest rules to ban Junit 4 for table-planner

2023-11-28 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu reassigned FLINK-33637:
--

Assignee: Jiabao Sun

> [JUnit5 Migration] Introduce ArchTest rules to ban Junit 4 for table-planner
> 
>
> Key: FLINK-33637
> URL: https://issues.apache.org/jira/browse/FLINK-33637
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Jiabao Sun
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: pull-request-available
>
> Currently, table-planner has already completed the migration to JUnit 5, and 
> we need to prevent the introduction of JUnit 4 tests in new PRs.





Re: [PR] [FLINK-32986][test] Fix createTemporaryFunction type inference error [flink]

2023-11-28 Thread via GitHub


jeyhunkarimov commented on PR #23586:
URL: https://github.com/apache/flink/pull/23586#issuecomment-1831282659

   > @jeyhunkarimov Thanks for contributing to Flink! Can you sort out the 
conflict that has happened?
   > 
   > @lincoln-lil may be the best person to review this PR.
   
   @jnh5y Thanks! The conflicts are now resolved and CI is green. Sure, I will wait for @lincoln-lil 





Re: [PR] [hotfix][docs] Update minor version for 1.17.2 release [flink]

2023-11-28 Thread via GitHub


Myasuka merged PR #23824:
URL: https://github.com/apache/flink/pull/23824





Re: [PR] [hotfix][docs] Update minor version for 1.17.2 release [flink]

2023-11-28 Thread via GitHub


yuchen-ecnu commented on PR #23824:
URL: https://github.com/apache/flink/pull/23824#issuecomment-1831281798

   @Myasuka Could you help merge this PR?





Re: [PR] [FLINK-33637][table-planner][JUnit5 Migration] Introduce ArchTest to ban Junit 4 for module table-planner [flink]

2023-11-28 Thread via GitHub


leonardBang merged PR #23791:
URL: https://github.com/apache/flink/pull/23791





Re: [PR] [FLINK-33637][table-planner][JUnit5 Migration] Introduce ArchTest to ban Junit 4 for module table-planner [flink]

2023-11-28 Thread via GitHub


Jiabao-Sun commented on PR #23791:
URL: https://github.com/apache/flink/pull/23791#issuecomment-1831280708

   squash is fine, thanks.





[PR] [hotfix][docs] Update minor version for 1.17.2 release [flink]

2023-11-28 Thread via GitHub


yuchen-ecnu opened a new pull request, #23824:
URL: https://github.com/apache/flink/pull/23824

   ## What is the purpose of the change
   
   Update flink doc version for 1.17.2 release
   
   
   ## Brief change log
   
   - change flink doc version
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable 
   





Re: [PR] [FLINK-33398][runtime] Support switching from batch to stream mode for one input stream operator [flink]

2023-11-28 Thread via GitHub


xintongsong commented on code in PR #23521:
URL: https://github.com/apache/flink/pull/23521#discussion_r1406042500


##
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/streamrecord/RecordAttributes.java:
##
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.runtime.streamrecord;
+
+import org.apache.flink.annotation.Experimental;
+
+import java.util.Objects;
+
+/**
+ * A RecordAttributes element provides a stream task with information that can be
+ * used to optimize the stream task's performance.
+ */
+@Experimental
+public class RecordAttributes extends StreamElement {
+private final boolean backlog;

Review Comment:
   Minor: I'd suggest the name `isBacklog`.



##
flink-runtime/src/main/java/org/apache/flink/runtime/source/event/BacklogEvent.java:
##
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.source.event;
+
+import org.apache.flink.runtime.operators.coordination.OperatorEvent;
+
+import java.util.Objects;
+
+/** A source event that notifies the source operator of the backlog status. */
+public class BacklogEvent implements OperatorEvent {
+
+private static final long serialVersionUID = 1L;
+private final boolean backlog;
+
+public BacklogEvent(boolean backlog) {
+this.backlog = backlog;
+}
+
+public boolean isBacklog() {
+return backlog;
+}
+
+@Override
+public boolean equals(Object o) {
+if (this == o) {
+return true;
+}
+if (o == null || getClass() != o.getClass()) {
+return false;
+}
+final BacklogEvent that = (BacklogEvent) o;
+return Objects.equals(backlog, that.backlog);
+}

Review Comment:
   It would be a good practice to always implement the `hashCode()` together 
with `equals`, in case the class is accidentally used for a hash-map/set.
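The suggestion above can be sketched as follows (the class name and field are taken from the `BacklogEvent` diff, but this standalone variant and its exact `hashCode` are illustrative assumptions, not the PR's final code):

```java
import java.util.Objects;

// Minimal sketch: an event class whose equals and hashCode are kept in sync,
// so instances behave correctly as keys in hash-based collections.
class BacklogEventSketch {
    private final boolean backlog;

    BacklogEventSketch(boolean backlog) {
        this.backlog = backlog;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (o == null || getClass() != o.getClass()) {
            return false;
        }
        return backlog == ((BacklogEventSketch) o).backlog;
    }

    @Override
    public int hashCode() {
        // Derived from exactly the fields compared in equals, preserving the
        // contract that equal objects have equal hash codes.
        return Objects.hash(backlog);
    }
}
```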



##
flink-runtime/src/test/java/org/apache/flink/runtime/source/coordinator/SourceCoordinatorContextTest.java:
##
@@ -261,4 +262,26 @@ private List registerReaders() {
 
 return infos;
 }
+
+@Test
+void testSetIsProcessingBacklog() throws Exception {
+sourceReady();
+registerReader(0, 0);
+context.setIsProcessingBacklog(true);
+
+Thread.sleep(10); // sleep a little for the coordinator to handle 
reader register event.

Review Comment:
   Such a sleep-based approach is usually problematic. If the time is too short, 
this could become a source of instability. If the time is too long, it slows 
down the entire CI.
   
   I'd suggest polling for the expected condition at a short interval, and 
failing if the condition is not reached within a relatively long timeout.
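The polling pattern suggested here can be sketched as below (the helper name, timeouts, and API are illustrative assumptions, not code from this PR; test utilities such as Flink's `CommonTestUtils.waitUntilCondition` or the Awaitility library provide the same idea):

```java
import java.time.Duration;
import java.util.function.Supplier;

// Minimal sketch of condition polling: re-check at a short interval and fail
// only if the condition never holds within a generous overall timeout.
final class PollUntil {
    static void await(Supplier<Boolean> condition, Duration timeout, Duration interval)
            throws InterruptedException {
        long deadline = System.nanoTime() + timeout.toNanos();
        while (!condition.get()) {
            if (System.nanoTime() > deadline) {
                throw new AssertionError("Condition not reached within " + timeout);
            }
            Thread.sleep(interval.toMillis());
        }
    }
}
```

In the test above, the fixed `Thread.sleep(10)` could then be replaced by polling whatever state the coordinator exposes, with a short interval and a long timeout.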



##
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/streamrecord/RecordAttributes.java:
##
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *

[jira] [Comment Edited] (FLINK-33668) Decoupling Shuffle network memory and job topology

2023-11-28 Thread dalongliu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17790844#comment-17790844
 ] 

dalongliu edited comment on FLINK-33668 at 11/29/23 5:58 AM:
-

Big +1, there is also a duplicate issue: https://issues.apache.org/jira/browse/FLINK-33004


was (Author: lsy):
Big +1, there also has  a depulicated issue: 
https://issues.apache.org/jira/browse/FLINK-31643

> Decoupling Shuffle network memory and job topology
> --
>
> Key: FLINK-33668
> URL: https://issues.apache.org/jira/browse/FLINK-33668
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Reporter: Jiang Xin
>Priority: Major
> Fix For: 1.19.0
>
>
> With FLINK-30469  and FLINK-31643, we have decoupled the shuffle network 
> memory and the parallelism of tasks by limiting the number of buffers for 
> each InputGate and ResultPartition in Hybrid Shuffle. However, when too many 
> shuffle tasks are running simultaneously on the same TaskManager, 
> "Insufficient number of network buffers" errors would still occur. This 
> usually happens when Slot Sharing Group is enabled or a TaskManager contains 
> multiple slots.
> We want to make sure that the TaskManager does not encounter "Insufficient 
> number of network buffers" even if there are dozens of InputGates and 
> ResultPartitions running on the same TaskManager simultaneously.





Re: [PR] [FLINK-33682][metrics] Reuse source operator input records/bytes metrics for SourceOperatorStreamTask [flink]

2023-11-28 Thread via GitHub


flinkbot commented on PR #23823:
URL: https://github.com/apache/flink/pull/23823#issuecomment-1831269268

   
   ## CI report:
   
   * fef2fd9e9a9803f547c9c3c5a374d8b2e186b6d8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-33682) Reuse source operator input records/bytes metrics for SourceOperatorStreamTask

2023-11-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33682:
---
Labels: pull-request-available  (was: )

> Reuse source operator input records/bytes metrics for SourceOperatorStreamTask
> --
>
> Key: FLINK-33682
> URL: https://issues.apache.org/jira/browse/FLINK-33682
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Metrics
>Reporter: Zhanghao Chen
>Assignee: Zhanghao Chen
>Priority: Major
>  Labels: pull-request-available
>
> For SourceOperatorStreamTask, the source operator is the head operator that takes 
> input. We can directly reuse source operator input records/bytes metrics for 
> it.





[PR] [FLINK-33682][metrics] Reuse source operator input records/bytes metrics for SourceOperatorStreamTask [flink]

2023-11-28 Thread via GitHub


X-czh opened a new pull request, #23823:
URL: https://github.com/apache/flink/pull/23823

   
   
   ## What is the purpose of the change
   
   Integrate standard source I/O metrics into SourceOperatorStreamTask, so that 
users can see source numRecordsIn & numBytesIn on the job overview page of web 
dashboard.
   
   ## Brief change log
   
   Reuse source operator input records/bytes metrics for 
SourceOperatorStreamTask.
   
   ## Verifying this change
   
   Manually verified on a Kafka source job.
   
   
![screenshot-20231129-131807_副本](https://github.com/apache/flink/assets/22020529/0bdc073c-b666-4341-b101-4d8cc96da111)
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   





[jira] [Assigned] (FLINK-33682) Reuse source operator input records/bytes metrics for SourceOperatorStreamTask

2023-11-28 Thread Fang Yong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fang Yong reassigned FLINK-33682:
-

Assignee: Zhanghao Chen

> Reuse source operator input records/bytes metrics for SourceOperatorStreamTask
> --
>
> Key: FLINK-33682
> URL: https://issues.apache.org/jira/browse/FLINK-33682
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Metrics
>Reporter: Zhanghao Chen
>Assignee: Zhanghao Chen
>Priority: Major
>
> For SourceOperatorStreamTask, the source operator is the head operator that takes 
> input. We can directly reuse source operator input records/bytes metrics for 
> it.





[jira] [Updated] (FLINK-33680) Failed to build document with docker

2023-11-28 Thread wangshiheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangshiheng updated FLINK-33680:

Summary: Failed to build document with docker  (was: Failed to build 
document with docke)

> Failed to build document with docker
> 
>
> Key: FLINK-33680
> URL: https://issues.apache.org/jira/browse/FLINK-33680
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.19.0
>Reporter: wangshiheng
>Priority: Major
>  Labels: doc-site, docement
>
> Following the documentation at 
> [https://github.com/apache/flink/blob/master/docs/README.md]
>  
> The execution results are as follows:
> {code:java}
> [root@bigdatadev workpace]# git clone https://github.com/apache/flink.git
> ...
> [root@bigdatadev workpace]# cd flink/docs/
> [root@bigdatadev docs]# ./setup_docs.sh
> [root@bigdatadev docs]# docker pull jakejarvis/hugo-extended:latest
> latest: Pulling from jakejarvis/hugo-extended
> Digest: 
> sha256:f659daa3b52693d8f6fc380e4fc5d0d3faf5b9c25ef260244ff67625c59c45a7
> Status: Image is up to date for jakejarvis/hugo-extended:latest
> docker.io/jakejarvis/hugo-extended:latest
> [root@bigdatadev docs]#  docker run -v $(pwd):/src -p 1313:1313 
> jakejarvis/hugo-extended:latest server --buildDrafts --buildFuture --bind 
> 0.0.0.0
> Start building sites … 
> hugo v0.113.0-085c1b3d614e23d218ebf9daad909deaa2390c9a+extended linux/amd64 
> BuildDate=2023-06-05T15:04:51Z VendorInfo=docker
> Built in 515 ms
> Error: error building site: assemble: "/src/content/_index.md:36:1": failed 
> to extract shortcode: template for shortcode "columns" not found
>  {code}
>  
> [root@bigdatadev docs]# vim content/_index.md
> {panel}
>  30 # Apache Flink Documentation
>  31 
>  32 {{< center >}}
>  33 *{*}Apache Flink{*}* is a framework and distributed processing engine for 
> stateful computations over *unbounded* and *bounded* data streams. Flink has 
> been designed to run in {*}all common cluster environments{*}, perform 
> computations at *in-memory* speed and at {*}any scale{*}.
>  34 {{< /center >}}
>  35 
> {color:#de350b} 36 \{{< columns >}}{color}
>  37 
>  38 ### Try Flink
>  39 
>  40 If you’re interested in playing around with Flink, try one of our 
> tutorials:
>  41 
>  42 * [Fraud Detection with the DataStream API]({{{}< ref 
> "docs/try-flink/datastream" >{}}})
> {panel}
>  





[jira] [Updated] (FLINK-33680) Failed to build document with docke

2023-11-28 Thread wangshiheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangshiheng updated FLINK-33680:

Description: 
Following the documentation at 
[https://github.com/apache/flink/blob/master/docs/README.md]

 

The execution results are as follows:
{code:java}
[root@bigdatadev workpace]# git clone https://github.com/apache/flink.git
...

[root@bigdatadev workpace]# cd flink/docs/
[root@bigdatadev docs]# ./setup_docs.sh


[root@bigdatadev docs]# docker pull jakejarvis/hugo-extended:latest
latest: Pulling from jakejarvis/hugo-extended
Digest: sha256:f659daa3b52693d8f6fc380e4fc5d0d3faf5b9c25ef260244ff67625c59c45a7
Status: Image is up to date for jakejarvis/hugo-extended:latest
docker.io/jakejarvis/hugo-extended:latest


[root@bigdatadev docs]#  docker run -v $(pwd):/src -p 1313:1313 
jakejarvis/hugo-extended:latest server --buildDrafts --buildFuture --bind 
0.0.0.0
Start building sites … 
hugo v0.113.0-085c1b3d614e23d218ebf9daad909deaa2390c9a+extended linux/amd64 
BuildDate=2023-06-05T15:04:51Z VendorInfo=docker
Built in 515 ms
Error: error building site: assemble: "/src/content/_index.md:36:1": failed to 
extract shortcode: template for shortcode "columns" not found
 {code}
 

[root@bigdatadev docs]# vim content/_index.md
{panel}
 30 # Apache Flink Documentation
 31 
 32 {{< center >}}
 33 *{*}Apache Flink{*}* is a framework and distributed processing engine for 
stateful computations over *unbounded* and *bounded* data streams. Flink has 
been designed to run in {*}all common cluster environments{*}, perform 
computations at *in-memory* speed and at {*}any scale{*}.
 34 {{< /center >}}
 35 
{color:#de350b} 36 \{{< columns >}}{color}
 37 
 38 ### Try Flink
 39 
 40 If you’re interested in playing around with Flink, try one of our tutorials:
 41 
 42 * [Fraud Detection with the DataStream API]({{{}< ref 
"docs/try-flink/datastream" >{}}})
{panel}
 

  was:
Follow the documentation, the documentation comes from 
[https://github.com/apache/flink/blob/master/docs/README.md]

 

The implementation results are as follows:
{code:java}
[root@bigdatadev workpace]# git clone https://github.com/section9-lab/flink.git
...

[root@bigdatadev workpace]# cd flink/docs/
[root@bigdatadev docs]# ./setup_docs.sh


[root@bigdatadev docs]# docker pull jakejarvis/hugo-extended:latest
latest: Pulling from jakejarvis/hugo-extended
Digest: sha256:f659daa3b52693d8f6fc380e4fc5d0d3faf5b9c25ef260244ff67625c59c45a7
Status: Image is up to date for jakejarvis/hugo-extended:latest
docker.io/jakejarvis/hugo-extended:latest


[root@bigdatadev docs]#  docker run -v $(pwd):/src -p 1313:1313 
jakejarvis/hugo-extended:latest server --buildDrafts --buildFuture --bind 
0.0.0.0
Start building sites … 
hugo v0.113.0-085c1b3d614e23d218ebf9daad909deaa2390c9a+extended linux/amd64 
BuildDate=2023-06-05T15:04:51Z VendorInfo=docker
Built in 515 ms
Error: error building site: assemble: "/src/content/_index.md:36:1": failed to 
extract shortcode: template for shortcode "columns" not found
 {code}
 

[root@bigdatadev docs]# vim content/_index.md
{panel}
 30 # Apache Flink Documentation
 31 
 32 \{{< center >}}
 33 **Apache Flink** is a framework and distributed processing engine for 
stateful computations over *unbounded* and *bounded* data streams. Flink has 
been designed to run in *all common cluster environments*, perform 
computations at *in-memory* speed and at *any scale*.
 34 \{{< /center >}}
 35 
{color:#de350b} 36 \{{< columns >}}{color}
 37 
 38 ### Try Flink
 39 
 40 If you’re interested in playing around with Flink, try one of our tutorials:
 41 
 42 * [Fraud Detection with the DataStream API](\{{< ref 
"docs/try-flink/datastream" >}})
{panel}
 


> Failed to build document with docker
> ---
>
> Key: FLINK-33680
> URL: https://issues.apache.org/jira/browse/FLINK-33680
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.19.0
>Reporter: wangshiheng
>Priority: Major
>  Labels: doc-site, docement
>
> Following the documentation at 
> [https://github.com/apache/flink/blob/master/docs/README.md], the result is as 
> follows:
> {code:java}
> [root@bigdatadev workpace]# git clone https://github.com/apache/flink.git
> ...
> [root@bigdatadev workpace]# cd flink/docs/
> [root@bigdatadev docs]# ./setup_docs.sh
> [root@bigdatadev docs]# docker pull jakejarvis/hugo-extended:latest
> latest: Pulling from jakejarvis/hugo-extended
> Digest: 
> sha256:f659daa3b52693d8f6fc380e4fc5d0d3faf5b9c25ef260244ff67625c59c45a7
> Status: Image is up to date for jakejarvis/hugo-extended:latest
> docker.io/jakejarvis/hugo-extended:latest
> [root@bigdatadev docs]#  docker run -v $(pwd):/src -p 1313:1313 
> jakejarvis/hugo-extended:latest 

[jira] [Updated] (FLINK-33680) Failed to build document with docker

2023-11-28 Thread wangshiheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangshiheng updated FLINK-33680:

Description: 
Following the documentation at 
[https://github.com/apache/flink/blob/master/docs/README.md], the result is as 
follows:
{code:java}
[root@bigdatadev workpace]# git clone https://github.com/section9-lab/flink.git
...

[root@bigdatadev workpace]# cd flink/docs/
[root@bigdatadev docs]# ./setup_docs.sh


[root@bigdatadev docs]# docker pull jakejarvis/hugo-extended:latest
latest: Pulling from jakejarvis/hugo-extended
Digest: sha256:f659daa3b52693d8f6fc380e4fc5d0d3faf5b9c25ef260244ff67625c59c45a7
Status: Image is up to date for jakejarvis/hugo-extended:latest
docker.io/jakejarvis/hugo-extended:latest


[root@bigdatadev docs]#  docker run -v $(pwd):/src -p 1313:1313 
jakejarvis/hugo-extended:latest server --buildDrafts --buildFuture --bind 
0.0.0.0
Start building sites … 
hugo v0.113.0-085c1b3d614e23d218ebf9daad909deaa2390c9a+extended linux/amd64 
BuildDate=2023-06-05T15:04:51Z VendorInfo=docker
Built in 515 ms
Error: error building site: assemble: "/src/content/_index.md:36:1": failed to 
extract shortcode: template for shortcode "columns" not found
 {code}
 

[root@bigdatadev docs]# vim content/_index.md
{panel}
 30 # Apache Flink Documentation
 31 
 32 \{{< center >}}
 33 **Apache Flink** is a framework and distributed processing engine for 
stateful computations over *unbounded* and *bounded* data streams. Flink has 
been designed to run in *all common cluster environments*, perform 
computations at *in-memory* speed and at *any scale*.
 34 \{{< /center >}}
 35 
{color:#de350b} 36 \{{< columns >}}{color}
 37 
 38 ### Try Flink
 39 
 40 If you’re interested in playing around with Flink, try one of our tutorials:
 41 
 42 * [Fraud Detection with the DataStream API](\{{< ref 
"docs/try-flink/datastream" >}})
{panel}
 

  was:
Follow the documentation, the documentation comes from 
[https://github.com/apache/flink/blob/master/docs/README.md]

 

The implementation results are as follows:
{code:java}
[root@bigdatadev workpace]# git clone https://github.com/section9-lab/flink.git
...

[root@bigdatadev workpace]# cd flink/docs/
[root@bigdatadev docs]# ./setup_docs.sh


[root@bigdatadev docs]# docker pull jakejarvis/hugo-extended:latest
latest: Pulling from jakejarvis/hugo-extended
Digest: sha256:f659daa3b52693d8f6fc380e4fc5d0d3faf5b9c25ef260244ff67625c59c45a7
Status: Image is up to date for jakejarvis/hugo-extended:latest
docker.io/jakejarvis/hugo-extended:latest


[root@bigdatadev docs]#  docker run -v $(pwd):/src -p 1313:1313 
jakejarvis/hugo-extended:latest server --buildDrafts --buildFuture --bind 
0.0.0.0
Start building sites … 
hugo v0.113.0-085c1b3d614e23d218ebf9daad909deaa2390c9a+extended linux/amd64 
BuildDate=2023-06-05T15:04:51Z VendorInfo=docker
Built in 515 ms
Error: error building site: assemble: "/src/content/_index.md:36:1": failed to 
extract shortcode: template for shortcode "columns" not found
 {code}
 

 

 


> Failed to build document with docker
> ---
>
> Key: FLINK-33680
> URL: https://issues.apache.org/jira/browse/FLINK-33680
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.19.0
>Reporter: wangshiheng
>Priority: Major
>  Labels: doc-site, docement
>
> Following the documentation at 
> [https://github.com/apache/flink/blob/master/docs/README.md], the result is as 
> follows:
> {code:java}
> [root@bigdatadev workpace]# git clone 
> https://github.com/section9-lab/flink.git
> ...
> [root@bigdatadev workpace]# cd flink/docs/
> [root@bigdatadev docs]# ./setup_docs.sh
> [root@bigdatadev docs]# docker pull jakejarvis/hugo-extended:latest
> latest: Pulling from jakejarvis/hugo-extended
> Digest: 
> sha256:f659daa3b52693d8f6fc380e4fc5d0d3faf5b9c25ef260244ff67625c59c45a7
> Status: Image is up to date for jakejarvis/hugo-extended:latest
> docker.io/jakejarvis/hugo-extended:latest
> [root@bigdatadev docs]#  docker run -v $(pwd):/src -p 1313:1313 
> jakejarvis/hugo-extended:latest server --buildDrafts --buildFuture --bind 
> 0.0.0.0
> Start building sites … 
> hugo v0.113.0-085c1b3d614e23d218ebf9daad909deaa2390c9a+extended linux/amd64 
> BuildDate=2023-06-05T15:04:51Z VendorInfo=docker
> Built in 515 ms
> Error: error building site: assemble: "/src/content/_index.md:36:1": failed 
> to extract shortcode: template for shortcode "columns" not found
>  {code}
>  
> [root@bigdatadev docs]# vim content/_index.md
> {panel}
>  30 # Apache Flink Documentation
>  31 
>  32 \{{< center >}}
>  33 **Apache Flink** is a framework and distributed processing engine for 
> stateful computations over *unbounded* and *bounded* data streams. Flink has 
> 

[jira] [Created] (FLINK-33682) Reuse source operator input records/bytes metrics for SourceOperatorStreamTask

2023-11-28 Thread Zhanghao Chen (Jira)
Zhanghao Chen created FLINK-33682:
-

 Summary: Reuse source operator input records/bytes metrics for 
SourceOperatorStreamTask
 Key: FLINK-33682
 URL: https://issues.apache.org/jira/browse/FLINK-33682
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Metrics
Reporter: Zhanghao Chen


For SourceOperatorStreamTask, the source operator is the head operator that 
takes input. We can directly reuse the source operator's input records/bytes 
metrics for it.





[jira] [Created] (FLINK-33681) Display source/sink numRecordsIn/Out & numBytesIn/Out on UI

2023-11-28 Thread Zhanghao Chen (Jira)
Zhanghao Chen created FLINK-33681:
-

 Summary: Display source/sink numRecordsIn/Out & numBytesIn/Out on 
UI
 Key: FLINK-33681
 URL: https://issues.apache.org/jira/browse/FLINK-33681
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Metrics
Reporter: Zhanghao Chen
 Attachments: image-2023-11-29-13-26-15-176.png

Currently, the numRecordsIn & numBytesIn metrics for sources and the 
numRecordsOut & numBytesOut metrics for sinks are always 0 on the Flink web 
dashboard.

[FLINK-11576|https://issues.apache.org/jira/browse/FLINK-11576] brings us these 
metrics on the operator level, but it does not integrate them on the task 
level. On the other hand, the summary metrics on the job overview page are 
based on the task-level I/O metrics. As a result, even though new connectors 
supporting FLIP-33 metrics report operator-level I/O metrics, we still cannot 
see them on the dashboard.

This ticket serves as an umbrella issue to integrate standard source/sink I/O 
metrics with the corresponding task I/O metrics. 

!image-2023-11-29-13-26-15-176.png|width=590,height=252!





[jira] [Created] (FLINK-33680) Failed to build document with docker

2023-11-28 Thread wangshiheng (Jira)
wangshiheng created FLINK-33680:
---

 Summary: Failed to build document with docker
 Key: FLINK-33680
 URL: https://issues.apache.org/jira/browse/FLINK-33680
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.19.0
Reporter: wangshiheng


Following the documentation at 
[https://github.com/apache/flink/blob/master/docs/README.md], the result is as 
follows:
{code:java}
[root@bigdatadev workpace]# git clone https://github.com/section9-lab/flink.git
...

[root@bigdatadev workpace]# cd flink/docs/
[root@bigdatadev docs]# ./setup_docs.sh


[root@bigdatadev docs]# docker pull jakejarvis/hugo-extended:latest
latest: Pulling from jakejarvis/hugo-extended
Digest: sha256:f659daa3b52693d8f6fc380e4fc5d0d3faf5b9c25ef260244ff67625c59c45a7
Status: Image is up to date for jakejarvis/hugo-extended:latest
docker.io/jakejarvis/hugo-extended:latest


[root@bigdatadev docs]#  docker run -v $(pwd):/src -p 1313:1313 
jakejarvis/hugo-extended:latest server --buildDrafts --buildFuture --bind 
0.0.0.0
Start building sites … 
hugo v0.113.0-085c1b3d614e23d218ebf9daad909deaa2390c9a+extended linux/amd64 
BuildDate=2023-06-05T15:04:51Z VendorInfo=docker
Built in 515 ms
Error: error building site: assemble: "/src/content/_index.md:36:1": failed to 
extract shortcode: template for shortcode "columns" not found
 {code}
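A likely cause of the "template for shortcode 'columns' not found" error in the
transcript above is that the Hugo theme providing that shortcode was never
fetched into the checkout (setup_docs.sh initializes it as a git submodule).
The sketch below is illustrative only; the theme name and paths are assumptions,
not taken from the report. It simulates Hugo's shortcode lookup, which searches
layouts/shortcodes in the site and in its themes:

```shell
# Illustrative sketch: Hugo resolves a shortcode such as {{< columns >}} from
# layouts/shortcodes in the site or in one of its themes, so an empty themes/
# directory (uninitialized submodule) yields "template for shortcode not found".
# Simulate the lookup against a populated theme directory:
site=$(mktemp -d)
mkdir -p "$site/themes/book/layouts/shortcodes"
touch "$site/themes/book/layouts/shortcodes/columns.html"
# A non-empty result means the shortcode would resolve:
find "$site" -path '*shortcodes*' -name 'columns*'
rm -rf "$site"
```

In a real docs checkout, re-running `./setup_docs.sh` or
`git submodule update --init --recursive` before starting the Docker container
is the usual way to repopulate `themes/`.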
 

 

 





Re: [PR] [FLINK-33637][table-planner][JUnit5 Migration] Introduce ArchTest to ban Junit 4 for module table-planner [flink]

2023-11-28 Thread via GitHub


JingGe commented on PR #23791:
URL: https://github.com/apache/flink/pull/23791#issuecomment-1831248182

> Thanks @JingGe for the review. I did not explicitly run the ArchUnit test 
in other modules. The reduction of ArchUnit violations in other modules in 
this PR is due to the repair of the base classes in flink-table-planner, and 
no new violations are introduced by this PR.
   
   Makes sense. Thanks for the info. Would you like to squash commits and 
rebase? 
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33637][table-planner][JUnit5 Migration] Introduce ArchTest to ban Junit 4 for module table-planner [flink]

2023-11-28 Thread via GitHub


Jiabao-Sun commented on PR #23791:
URL: https://github.com/apache/flink/pull/23791#issuecomment-1831231617

   > Thanks @Jiabao-Sun for the update. There are some ArchUnit violations in 
other modules. Did you run the ArchUnit test in those modules explicitly?
   
   Thanks @JingGe for the review.
   I did not explicitly run the ArchUnit test in other modules. 
   The reduction of ArchUnit violations in other modules in this PR is due to 
the repair of the base classes in flink-table-planner, and no new violations 
are introduced by this PR. 





[jira] [Commented] (FLINK-33670) Public operators cannot be reused in multi sinks

2023-11-28 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17790898#comment-17790898
 ] 

Benchao Li commented on FLINK-33670:


I would take this as a CTE (common table expression) optimization. Currently, 
there is a {{SubplanReuser}} that reuses RelNodes as much as possible based on 
digests after optimization. It is limited since it runs after optimization, as 
you can see in this Jira's description.

One way to improve that is to make a "temporary view" a real "common table 
expression" in the whole optimization process.

> Public operators cannot be reused in multi sinks
> 
>
> Key: FLINK-33670
> URL: https://issues.apache.org/jira/browse/FLINK-33670
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Lyn Zhang
>Priority: Major
> Attachments: image-2023-11-28-14-31-30-153.png
>
>
> Dear all:
>    I find that some public operators cannot be reused when submitting a job 
> with multiple sinks. Here is an example:
> {code:java}
> CREATE TABLE source (
>     id              STRING,
>     ts              TIMESTAMP(3),
>     v              BIGINT,
>     WATERMARK FOR ts AS ts - INTERVAL '3' SECOND
> ) WITH (...);
> CREATE VIEW source_distinct AS
> SELECT * FROM (
>     SELECT *, ROW_NUMBER() OVER w AS row_nu
>     FROM source
>     WINDOW w AS (PARTITION BY id ORDER BY proctime() ASC)
> ) WHERE row_nu = 1;
> CREATE TABLE print1 (
>      id             STRING,
>      ts             TIMESTAMP(3)
> ) WITH('connector' = 'blackhole');
> INSERT INTO print1 SELECT id, ts FROM source_distinct;
> CREATE TABLE print2 (
>      id              STRING,
>      ts              TIMESTAMP(3),
>      v               BIGINT
> ) WITH('connector' = 'blackhole');
> INSERT INTO print2
> SELECT id, TUMBLE_START(ts, INTERVAL '20' SECOND), SUM(v)
> FROM source_distinct
> GROUP BY TUMBLE(ts, INTERVAL '20' SECOND), id; {code}
> !image-2023-11-28-14-31-30-153.png|width=384,height=145!
> I checked the code: Flink adds the rule CoreRules.PROJECT_MERGE by default, 
> which creates different digests for the deduplicate operator and finally 
> fails to match the same sub-plan.
> In real production environments, reusing the same sub-plan (such as 
> deduplicate) is more valuable than project merge. A good solution is to 
> prevent the project merge from crossing shuffle operators in multi-sink 
> cases. 
> What do you think? Looking forward to your reply.





Re: [PR] FLINK-33600][table] print the query time cost for batch query in cli [flink]

2023-11-28 Thread via GitHub


JingGe commented on code in PR #23809:
URL: https://github.com/apache/flink/pull/23809#discussion_r1408745083


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/print/TableauStyle.java:
##
@@ -117,6 +122,10 @@ public final class TableauStyle implements PrintStyle {
 
 @Override
 public void print(Iterator it, PrintWriter printWriter) {
+print(it, printWriter, -1);
+}
+
+public void print(Iterator it, PrintWriter printWriter, long 
queryBeginTime) {

Review Comment:
   Not sure I understand you correctly. For batch results, the footer line is 
printed/controlled by `TableauStyle`.






Re: [PR] FLINK-33600][table] print the query time cost for batch query in cli [flink]

2023-11-28 Thread via GitHub


JingGe commented on code in PR #23809:
URL: https://github.com/apache/flink/pull/23809#discussion_r1408742394


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/print/TableauStyle.java:
##
@@ -92,12 +95,14 @@ public final class TableauStyle implements PrintStyle {
 int[] columnWidths,
 int maxColumnWidth,
 boolean printNullAsEmpty,
-boolean printRowKind) {
+boolean printRowKind,
+boolean printQueryTimeCost) {

Review Comment:
   Both `printQueryTimeCost` and the input parameter `queryBeginTime` will be 
used to decide whether the time cost should be displayed.






Re: [PR] [FLINK-33637][table-planner][JUnit5 Migration] Introduce ArchTest to ban Junit 4 for module table-planner [flink]

2023-11-28 Thread via GitHub


JingGe commented on PR #23791:
URL: https://github.com/apache/flink/pull/23791#issuecomment-1831209459

   Thanks @Jiabao-Sun for the update. There are some ArchUnit violations in 
other modules. Did you run the ArchUnit test in those modules explicitly?





Re: [PR] [FLINK-26010] Develop archunit test for filesystem test code [flink]

2023-11-28 Thread via GitHub


JingGe merged PR #23798:
URL: https://github.com/apache/flink/pull/23798





[jira] [Commented] (FLINK-33674) In the wmassigners module, words are misspelled

2023-11-28 Thread wangshiheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17790889#comment-17790889
 ] 

wangshiheng commented on FLINK-33674:
-

Thank you very much, the contributor guide documentation has taught me a lot. I 
have just started trying to contribute to the Flink community; what do I need 
to do about this PR?

> In the wmassigners module, words are misspelled
> ---
>
> Key: FLINK-33674
> URL: https://issues.apache.org/jira/browse/FLINK-33674
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Affects Versions: 1.19.0
>Reporter: wangshiheng
>Priority: Minor
>  Labels: easyfix, pull-request-available
> Fix For: 1.19.0
>
>
> RowTimeMiniBatch{color:#FF}Assginer{color}Operator changed to 
> RowTimeMiniBatch{color:#ff8b00}Assigner{color}Operator
> RowTimeMiniBatch{color:#FF}Assginer{color}OperatorTest changed to 
> RowTimeMiniBatch{color:#ff8b00}Assigner{color}OperatorTest





Re: [PR] Add release 1.17.2 [flink-web]

2023-11-28 Thread via GitHub


Myasuka merged PR #696:
URL: https://github.com/apache/flink-web/pull/696





[jira] [Created] (FLINK-33679) RestoreMode uses NO_CLAIM as default instead of LEGACY

2023-11-28 Thread junzhong qin (Jira)
junzhong qin created FLINK-33679:


 Summary: RestoreMode uses NO_CLAIM as default instead of LEGACY
 Key: FLINK-33679
 URL: https://issues.apache.org/jira/browse/FLINK-33679
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Runtime / State Backends
Reporter: junzhong qin


RestoreMode uses NO_CLAIM as default instead of LEGACY.
{code:java}
public enum RestoreMode implements DescribedEnum {
    CLAIM(
            "Flink will take ownership of the given snapshot. It will clean the"
                    + " snapshot once it is subsumed by newer ones."),
    NO_CLAIM(
            "Flink will not claim ownership of the snapshot files. However it will make sure it"
                    + " does not depend on any artefacts from the restored snapshot. In order to do that,"
                    + " Flink will take the first checkpoint as a full one, which means it might"
                    + " reupload/duplicate files that are part of the restored checkpoint."),
    LEGACY(
            "This is the mode in which Flink worked so far. It will not claim ownership of the"
                    + " snapshot and will not delete the files. However, it can directly depend on"
                    + " the existence of the files of the restored checkpoint. It might not be safe"
                    + " to delete checkpoints that were restored in legacy mode ");

    private final String description;

    RestoreMode(String description) {
        this.description = description;
    }

    @Override
    @Internal
    public InlineElement getDescription() {
        return text(description);
    }

    public static final RestoreMode DEFAULT = NO_CLAIM;
} {code}
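For context, the restore mode can also be selected explicitly rather than
relying on DEFAULT. The fragment below is a hedged sketch: the configuration
key is assumed from Flink's SavepointConfigOptions and should be verified
against the documentation for your Flink version.

```yaml
# Assumed key name; verify for your Flink version.
execution.savepoint-restore-mode: NO_CLAIM
```

Per job, the `flink run -s <savepointPath>` CLI exposes a corresponding
restore-mode flag as well.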





[jira] [Updated] (FLINK-33395) The join hint doesn't work when appears in subquery

2023-11-28 Thread Yun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Tang updated FLINK-33395:
-
Fix Version/s: 1.17.3
   (was: 1.17.2)

> The join hint doesn't work when appears in subquery
> ---
>
> Key: FLINK-33395
> URL: https://issues.apache.org/jira/browse/FLINK-33395
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0, 1.17.0, 1.18.0
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0, 1.18.1, 1.17.3
>
>
> See the existent test 
> 'NestLoopJoinHintTest#testJoinHintWithJoinHintInCorrelateAndWithAgg', the 
> test plan is 
> {code:java}
> HashJoin(joinType=[LeftSemiJoin], where=[=(a1, EXPR$0)], select=[a1, b1], 
> build=[right], tryDistinctBuildRow=[true])
> :- Exchange(distribution=[hash[a1]])
> :  +- TableSourceScan(table=[[default_catalog, default_database, T1]], 
> fields=[a1, b1])
> +- Exchange(distribution=[hash[EXPR$0]])
>+- LocalHashAggregate(groupBy=[EXPR$0], select=[EXPR$0])
>   +- Calc(select=[EXPR$0])
>  +- HashAggregate(isMerge=[true], groupBy=[a1], select=[a1, 
> Final_COUNT(count$0) AS EXPR$0])
> +- Exchange(distribution=[hash[a1]])
>+- LocalHashAggregate(groupBy=[a1], select=[a1, 
> Partial_COUNT(a2) AS count$0])
>   +- NestedLoopJoin(joinType=[InnerJoin], where=[=(a2, a1)], 
> select=[a2, a1], build=[right])
>  :- TableSourceScan(table=[[default_catalog, 
> default_database, T2, project=[a2], metadata=[]]], fields=[a2], 
> hints=[[[ALIAS options:[T2)
>  +- Exchange(distribution=[broadcast])
> +- TableSourceScan(table=[[default_catalog, 
> default_database, T1, project=[a1], metadata=[]]], fields=[a1], 
> hints=[[[ALIAS options:[T1) {code}
> but the NestedLoopJoin should broadcast the left side.





[jira] [Created] (FLINK-33678) Remove configuration getters/setters that return/set complex Java objects

2023-11-28 Thread Junrui Li (Jira)
Junrui Li created FLINK-33678:
-

 Summary: Remove configuration getters/setters that return/set 
complex Java objects
 Key: FLINK-33678
 URL: https://issues.apache.org/jira/browse/FLINK-33678
 Project: Flink
  Issue Type: Sub-task
  Components: API / Core
Reporter: Junrui Li


FLINK-33581/FLIP-381 deprecates configuration getters/setters that return/set 
complex Java objects. 
In Flink 2.0 we should remove these deprecated methods and fields. This change 
will prevent users from configuring their jobs by passing complex Java objects, 
encouraging them to use ConfigOption instead.





Re: [PR] [FLINK-26694][table] Support lookup join via a multi-level inheritance of TableFunction [flink]

2023-11-28 Thread via GitHub


YesOrNo828 commented on PR #23684:
URL: https://github.com/apache/flink/pull/23684#issuecomment-1831101003

   @flinkbot run azure





[jira] [Commented] (FLINK-33644) FLIP-393: Make QueryOperations SQL serializable

2023-11-28 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17790865#comment-17790865
 ] 

Shengkai Fang commented on FLINK-33644:
---

Hi [~dwysakowicz]. I am confused about how we can get the SQL from the 
ModifyOperation. Why does this FLIP only involve QueryOperation?

> FLIP-393: Make QueryOperations SQL serializable
> ---
>
> Key: FLINK-33644
> URL: https://issues.apache.org/jira/browse/FLINK-33644
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.19.0
>
>
> https://cwiki.apache.org/confluence/x/4guZE





[jira] [Commented] (FLINK-32756) Reuse ClientHighAvailabilityServices when creating RestClusterClient

2023-11-28 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17790862#comment-17790862
 ] 

Yangze Guo commented on FLINK-32756:


[~xiangyu0xf] Done, go ahead!

> Reuse ClientHighAvailabilityServices when creating RestClusterClient
> 
>
> Key: FLINK-32756
> URL: https://issues.apache.org/jira/browse/FLINK-32756
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client / Job Submission
>Reporter: xiangyu feng
>Assignee: xiangyu feng
>Priority: Major
>
> Currently, every newly built RestClusterClient will create a new 
> ClientHighAvailabilityServices which is both unnecessary and resource 
> consuming. For example, each ZooKeeperClientHAServices contains a ZKClient 
> which holds a connection to ZK server and several related threads.
> By reusing ClientHighAvailabilityServices across multiple RestClusterClient 
> instances, we can save system resources(threads, connections), connection 
> establish time and leader retrieval time.





[jira] [Assigned] (FLINK-32756) Reuse ClientHighAvailabilityServices when creating RestClusterClient

2023-11-28 Thread Yangze Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yangze Guo reassigned FLINK-32756:
--

Assignee: xiangyu feng

> Reuse ClientHighAvailabilityServices when creating RestClusterClient
> 
>
> Key: FLINK-32756
> URL: https://issues.apache.org/jira/browse/FLINK-32756
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client / Job Submission
>Reporter: xiangyu feng
>Assignee: xiangyu feng
>Priority: Major
>
> Currently, every newly built RestClusterClient will create a new 
> ClientHighAvailabilityServices which is both unnecessary and resource 
> consuming. For example, each ZooKeeperClientHAServices contains a ZKClient 
> which holds a connection to ZK server and several related threads.
> By reusing ClientHighAvailabilityServices across multiple RestClusterClient 
> instances, we can save system resources(threads, connections), connection 
> establish time and leader retrieval time.





[jira] [Created] (FLINK-33677) Remove flink-conf.yaml from flink dist

2023-11-28 Thread Zhu Zhu (Jira)
Zhu Zhu created FLINK-33677:
---

 Summary: Remove flink-conf.yaml from flink dist
 Key: FLINK-33677
 URL: https://issues.apache.org/jira/browse/FLINK-33677
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Configuration
Reporter: Zhu Zhu


FLINK-33297/FLIP-366 supports parsing standard YAML files for Flink 
configuration. A new configuration file config.yaml, which should be a standard 
YAML file, is introduced.
To ensure compatibility, in Flink 1.x the old configuration parser will still 
be used if the old configuration file flink-conf.yaml exists; only if it does 
not exist will the new configuration file be used.
In Flink 2.0, we should remove the old configuration file from flink dist, as 
well as the old configuration parser.






[jira] [Updated] (FLINK-33668) Decoupling Shuffle network memory and job topology

2023-11-28 Thread Jiang Xin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiang Xin updated FLINK-33668:
--
Description: 
With FLINK-30469  and FLINK-31643, we have decoupled the shuffle network memory 
and the parallelism of tasks by limiting the number of buffers for each 
InputGate and ResultPartition in Hybrid Shuffle. However, when too many shuffle 
tasks are running simultaneously on the same TaskManager, "Insufficient number 
of network buffers" errors would still occur. This usually happens when Slot 
Sharing Group is enabled or a TaskManager contains multiple slots.

We want to make sure that the TaskManager does not encounter "Insufficient 
number of network buffers" even if there are dozens of InputGates and 
ResultPartitions running on the same TaskManager simultaneously.

  was:
With FLINK-30469  and FLINK-31643, we have decoupled the shuffle network memory 
and the parallelism of tasks by limiting the number of buffers for each 
InputGate and ResultPartition. However, when too many shuffle tasks are running 
simultaneously on the same TaskManager, "Insufficient number of network 
buffers" errors would still occur. This usually happens when Slot Sharing Group 
is enabled or a TaskManager contains multiple slots.

We want to make sure that the TaskManager does not encounter "Insufficient 
number of network buffers" even if there are dozens of InputGates and 
ResultPartitions running on the same TaskManager simultaneously.


> Decoupling Shuffle network memory and job topology
> --
>
> Key: FLINK-33668
> URL: https://issues.apache.org/jira/browse/FLINK-33668
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Reporter: Jiang Xin
>Priority: Major
> Fix For: 1.19.0
>
>
> With FLINK-30469  and FLINK-31643, we have decoupled the shuffle network 
> memory and the parallelism of tasks by limiting the number of buffers for 
> each InputGate and ResultPartition in Hybrid Shuffle. However, when too many 
> shuffle tasks are running simultaneously on the same TaskManager, 
> "Insufficient number of network buffers" errors would still occur. This 
> usually happens when Slot Sharing Group is enabled or a TaskManager contains 
> multiple slots.
> We want to make sure that the TaskManager does not encounter "Insufficient 
> number of network buffers" even if there are dozens of InputGates and 
> ResultPartitions running on the same TaskManager simultaneously.
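As an illustration of the idea (a hypothetical capping scheme, not Flink's actual buffer-allocation algorithm), limiting each gate's share keeps the total demand bounded by a fixed pool regardless of how many gates land on one TaskManager:

```python
def buffers_per_gate(total_pool: int, num_gates: int,
                     max_per_gate: int, min_per_gate: int = 2) -> int:
    """Hypothetical sketch: grant each InputGate/ResultPartition at most
    its fair share of a fixed buffer pool, so demand no longer grows
    with job parallelism or the number of slots per TaskManager."""
    if num_gates == 0:
        return 0
    fair_share = total_pool // num_gates
    granted = min(max_per_gate, fair_share)
    if granted < min_per_gate:
        # Below some floor the TaskManager would still fail with
        # "Insufficient number of network buffers".
        raise RuntimeError("Insufficient number of network buffers")
    return granted
```

The goal stated above is precisely to remove that failure branch for realistic numbers of concurrent gates.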





[jira] [Commented] (FLINK-33638) Support variable-length data generation for variable-length data types

2023-11-28 Thread Yubin Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17790860#comment-17790860
 ] 

Yubin Li commented on FLINK-33638:
--

[~lincoln] I have implemented the feature following 
https://issues.apache.org/jira/browse/FLINK-25284

The CI has now passed. PTAL, thanks :D 

> Support variable-length data generation for variable-length data types
> --
>
> Key: FLINK-33638
> URL: https://issues.apache.org/jira/browse/FLINK-33638
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common
>Affects Versions: 1.19.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
>
> Currently, for variable-length data types (varchar, varbinary, string, 
> bytes), the datagen connector always generates max-length data. We could 
> extend datagen to generate variable-length values (using a new option to 
> enable it, e.g., 'fields.f0.var-len'='true').
> The topic has been discussed in the mailing list thread 
> [https://lists.apache.org/thread/kp6popo4cnhl6vx31sdn6mlscpzj9tgc]
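The proposed behavior can be sketched as follows (illustrative Python, with `var_len` standing in for the suggested 'fields.f0.var-len' option; not the actual datagen implementation):

```python
import random
import string

def datagen_string(max_length: int, var_len: bool, rng: random.Random) -> str:
    """Sketch of the proposal: with var_len=False (today's behavior)
    every generated value is exactly max_length characters; with
    var_len=True the length itself is drawn uniformly from 1..max_length."""
    length = rng.randint(1, max_length) if var_len else max_length
    return "".join(rng.choice(string.ascii_letters) for _ in range(length))
```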





[jira] [Closed] (FLINK-32269) CreateTableAsITCase.testCreateTableAsInStatementSet fails on AZP

2023-11-28 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu closed FLINK-32269.
--
Fix Version/s: 1.18.1
   1.17.3
   Resolution: Fixed

Fixed in

master(1.19): 63996b5c7fe15d792e6a74d5323b008b9a762b52

release-1.18: b3b7240cc34e552273b26d8090d45e492474c9ea

release-1.17: 0053db03772a70c70de0516cc46f7ab363dc74f5

> CreateTableAsITCase.testCreateTableAsInStatementSet fails on AZP
> 
>
> Key: FLINK-32269
> URL: https://issues.apache.org/jira/browse/FLINK-32269
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Sergey Nuyanzin
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
> Fix For: 1.19.0, 1.18.1, 1.17.3
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=49532=logs=af184cdd-c6d8-5084-0b69-7e9c67b35f7a=0f3adb59-eefa-51c6-2858-3654d9e0749d=15797
> {noformat}
> Jun 01 03:40:51 03:40:51.881 [ERROR] Tests run: 4, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 104.874 s <<< FAILURE! - in 
> org.apache.flink.table.sql.codegen.CreateTableAsITCase
> Jun 01 03:40:51 03:40:51.881 [ERROR] 
> CreateTableAsITCase.testCreateTableAsInStatementSet  Time elapsed: 40.729 s  
> <<< FAILURE!
> Jun 01 03:40:51 org.opentest4j.AssertionFailedError: Did not get expected 
> results before timeout, actual result: 
> [{"before":null,"after":{"user_name":"Bob","order_cnt":1},"op":"c"}, 
> {"before":null,"after":{"user_name":"Alice","order_cnt":1},"op":"c"}, 
> {"before":{"user_name":"Bob","order_cnt":1},"after":null,"op":"d"}, 
> {"before":null,"after":{"user_name":"Bob","order_cnt":2},"op":"c"}]. ==> 
> expected:  but was: 
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:211)
> Jun 01 03:40:51   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.checkJsonResultFile(SqlITCaseBase.java:168)
> Jun 01 03:40:51   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.runAndCheckSQL(SqlITCaseBase.java:111)
> Jun 01 03:40:51   at 
> org.apache.flink.table.sql.codegen.CreateTableAsITCase.testCreateTableAsInStatementSet(CreateTableAsITCase.java:50)
> Jun 01 03:40:51   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jun 01 03:40:51   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jun 01 03:40:51   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jun 01 03:40:51   at java.lang.reflect.Method.invoke(Method.java:498)
> {noformat}
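The fix merged for this family of failures compares against the materialized result rather than the raw changelog. A minimal sketch of materializing such +create/-delete records (a hypothetical helper, not the actual SqlITCaseBase code), using the rows from the log above:

```python
import json

def materialize(changelog_lines):
    """Fold Debezium-style changelog records ("op": "c" create, "d" delete)
    into the final set of rows, so record ordering and intermediate
    retractions (e.g. from mini-batch timing) no longer affect the check."""
    rows = []
    for line in changelog_lines:
        rec = json.loads(line)
        if rec["op"] == "c":
            rows.append(rec["after"])
        elif rec["op"] == "d":
            rows.remove(rec["before"])
    return rows
```

Applied to the four records in the failure above, the materialized result is Alice with order_cnt 1 and Bob with order_cnt 2.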





[jira] (FLINK-32269) CreateTableAsITCase.testCreateTableAsInStatementSet fails on AZP

2023-11-28 Thread Leonard Xu (Jira)


[ https://issues.apache.org/jira/browse/FLINK-32269 ]


Leonard Xu deleted comment on FLINK-32269:


was (Author: leonard xu):
Fixed in master(1.19): 63996b5c7fe15d792e6a74d5323b008b9a762b52

> CreateTableAsITCase.testCreateTableAsInStatementSet fails on AZP
> 
>
> Key: FLINK-32269
> URL: https://issues.apache.org/jira/browse/FLINK-32269
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Sergey Nuyanzin
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
> Fix For: 1.19.0, 1.18.1, 1.17.3
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=49532=logs=af184cdd-c6d8-5084-0b69-7e9c67b35f7a=0f3adb59-eefa-51c6-2858-3654d9e0749d=15797
> {noformat}
> Jun 01 03:40:51 03:40:51.881 [ERROR] Tests run: 4, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 104.874 s <<< FAILURE! - in 
> org.apache.flink.table.sql.codegen.CreateTableAsITCase
> Jun 01 03:40:51 03:40:51.881 [ERROR] 
> CreateTableAsITCase.testCreateTableAsInStatementSet  Time elapsed: 40.729 s  
> <<< FAILURE!
> Jun 01 03:40:51 org.opentest4j.AssertionFailedError: Did not get expected 
> results before timeout, actual result: 
> [{"before":null,"after":{"user_name":"Bob","order_cnt":1},"op":"c"}, 
> {"before":null,"after":{"user_name":"Alice","order_cnt":1},"op":"c"}, 
> {"before":{"user_name":"Bob","order_cnt":1},"after":null,"op":"d"}, 
> {"before":null,"after":{"user_name":"Bob","order_cnt":2},"op":"c"}]. ==> 
> expected:  but was: 
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:211)
> Jun 01 03:40:51   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.checkJsonResultFile(SqlITCaseBase.java:168)
> Jun 01 03:40:51   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.runAndCheckSQL(SqlITCaseBase.java:111)
> Jun 01 03:40:51   at 
> org.apache.flink.table.sql.codegen.CreateTableAsITCase.testCreateTableAsInStatementSet(CreateTableAsITCase.java:50)
> Jun 01 03:40:51   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jun 01 03:40:51   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jun 01 03:40:51   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jun 01 03:40:51   at java.lang.reflect.Method.invoke(Method.java:498)
> {noformat}





[jira] [Resolved] (FLINK-31033) UsingRemoteJarITCase.testUdfInRemoteJar failed with assertion

2023-11-28 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu resolved FLINK-31033.

Fix Version/s: 1.18.1
   1.17.3
   Resolution: Fixed

Fixed in

master(1.19): 63996b5c7fe15d792e6a74d5323b008b9a762b52

release-1.18: b3b7240cc34e552273b26d8090d45e492474c9ea

release-1.17: 0053db03772a70c70de0516cc46f7ab363dc74f5

> UsingRemoteJarITCase.testUdfInRemoteJar failed with assertion
> -
>
> Key: FLINK-31033
> URL: https://issues.apache.org/jira/browse/FLINK-31033
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.16.2, 1.18.0, 1.17.1
>Reporter: Matthias Pohl
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: auto-deprioritized-critical, pull-request-available, 
> test-stability
> Fix For: 1.19.0, 1.18.1, 1.17.3
>
>
> {{UsingRemoteJarITCase.testUdfInRemoteJar}} failed with assertion:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=46009=logs=af184cdd-c6d8-5084-0b69-7e9c67b35f7a=160c9ae5-96fd-516e-1c91-deb81f59292a=18050
> {code}
> Feb 10 15:28:15 [ERROR] Tests run: 10, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 249.499 s <<< FAILURE! - in 
> org.apache.flink.table.sql.codegen.UsingRemoteJarITCase
> Feb 10 15:28:15 [ERROR] UsingRemoteJarITCase.testUdfInRemoteJar  Time 
> elapsed: 40.786 s  <<< FAILURE!
> Feb 10 15:28:15 org.opentest4j.AssertionFailedError: Did not get expected 
> results before timeout, actual result: 
> [{"before":null,"after":{"user_name":"Bob","order_cnt":1},"op":"c"}, 
> {"before":null,"after":{"user_name":"Alice","order_cnt":1},"op":"c"}, 
> {"before":{"user_name":"Bob","order_cnt":1},"after":null,"op":"d"}, 
> {"before":null,"after":{"user_name":"Bob","order_cnt":2},"op":"c"}]. ==> 
> expected:  but was: 
> Feb 10 15:28:15   at 
> org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:55)
> Feb 10 15:28:15   at 
> org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:40)
> Feb 10 15:28:15   at 
> org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:210)
> Feb 10 15:28:15   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.checkJsonResultFile(SqlITCaseBase.java:168)
> Feb 10 15:28:15   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.runAndCheckSQL(SqlITCaseBase.java:111)
> Feb 10 15:28:15   at 
> org.apache.flink.table.sql.codegen.UsingRemoteJarITCase.testUdfInRemoteJar(UsingRemoteJarITCase.java:106)
> [...]
> {code}





[jira] [Reopened] (FLINK-32269) CreateTableAsITCase.testCreateTableAsInStatementSet fails on AZP

2023-11-28 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu reopened FLINK-32269:


> CreateTableAsITCase.testCreateTableAsInStatementSet fails on AZP
> 
>
> Key: FLINK-32269
> URL: https://issues.apache.org/jira/browse/FLINK-32269
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Sergey Nuyanzin
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
> Fix For: 1.19.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=49532=logs=af184cdd-c6d8-5084-0b69-7e9c67b35f7a=0f3adb59-eefa-51c6-2858-3654d9e0749d=15797
> {noformat}
> Jun 01 03:40:51 03:40:51.881 [ERROR] Tests run: 4, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 104.874 s <<< FAILURE! - in 
> org.apache.flink.table.sql.codegen.CreateTableAsITCase
> Jun 01 03:40:51 03:40:51.881 [ERROR] 
> CreateTableAsITCase.testCreateTableAsInStatementSet  Time elapsed: 40.729 s  
> <<< FAILURE!
> Jun 01 03:40:51 org.opentest4j.AssertionFailedError: Did not get expected 
> results before timeout, actual result: 
> [{"before":null,"after":{"user_name":"Bob","order_cnt":1},"op":"c"}, 
> {"before":null,"after":{"user_name":"Alice","order_cnt":1},"op":"c"}, 
> {"before":{"user_name":"Bob","order_cnt":1},"after":null,"op":"d"}, 
> {"before":null,"after":{"user_name":"Bob","order_cnt":2},"op":"c"}]. ==> 
> expected:  but was: 
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:211)
> Jun 01 03:40:51   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.checkJsonResultFile(SqlITCaseBase.java:168)
> Jun 01 03:40:51   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.runAndCheckSQL(SqlITCaseBase.java:111)
> Jun 01 03:40:51   at 
> org.apache.flink.table.sql.codegen.CreateTableAsITCase.testCreateTableAsInStatementSet(CreateTableAsITCase.java:50)
> Jun 01 03:40:51   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jun 01 03:40:51   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jun 01 03:40:51   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jun 01 03:40:51   at java.lang.reflect.Method.invoke(Method.java:498)
> {noformat}





[jira] [Resolved] (FLINK-31141) CreateTableAsITCase.testCreateTableAs fails

2023-11-28 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu resolved FLINK-31141.

Fix Version/s: 1.18.1
   1.17.3
   Resolution: Fixed

Fixed in

master(1.19): 63996b5c7fe15d792e6a74d5323b008b9a762b52

release-1.18: b3b7240cc34e552273b26d8090d45e492474c9ea

release-1.17: 0053db03772a70c70de0516cc46f7ab363dc74f5

> CreateTableAsITCase.testCreateTableAs fails
> ---
>
> Key: FLINK-31141
> URL: https://issues.apache.org/jira/browse/FLINK-31141
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.17.0, 1.18.0
>Reporter: Rui Fan
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: stale-assigned, test-stability
> Fix For: 1.19.0, 1.18.1, 1.17.3
>
>
> CreateTableAsITCase.testCreateTableAs fails in 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=46323=logs=af184cdd-c6d8-5084-0b69-7e9c67b35f7a=160c9ae5-96fd-516e-1c91-deb81f59292a=14772]
>  
> {code:java}
> Feb 20 13:50:12 [ERROR] Failures: 
> Feb 20 13:50:12 [ERROR] CreateTableAsITCase.testCreateTableAs
> Feb 20 13:50:12 [ERROR]   Run 1: Did not get expected results before timeout, 
> actual result: 
> [{"before":null,"after":{"user_name":"Bob","order_cnt":1},"op":"c"}, 
> {"before":null,"after":{"user_name":"Alice","order_cnt":1},"op":"c"}, 
> {"before":{"user_name":"Bob","order_cnt":1},"after":null,"op":"d"}, 
> {"before":null,"after":{"user_name":"Bob","order_cnt":2},"op":"c"}]. ==> 
> expected:  but was: 
> Feb 20 13:50:12 [INFO]   Run 2: PASS
> Feb 20 13:50:12 [INFO] 
> Feb 20 13:50:12 [INFO] 
> Feb 20 13:50:12 [ERROR] Tests run: 15, Failures: 1, Errors: 0, Skipped: 0 
> {code}





[jira] [Resolved] (FLINK-31339) PlannerScalaFreeITCase.testImperativeUdaf

2023-11-28 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu resolved FLINK-31339.

Fix Version/s: 1.18.1
   1.17.3
   Resolution: Fixed

Fixed in

master(1.19): 63996b5c7fe15d792e6a74d5323b008b9a762b52

release-1.18: b3b7240cc34e552273b26d8090d45e492474c9ea

release-1.17: 0053db03772a70c70de0516cc46f7ab363dc74f5

> PlannerScalaFreeITCase.testImperativeUdaf
> -
>
> Key: FLINK-31339
> URL: https://issues.apache.org/jira/browse/FLINK-31339
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.2, 1.18.0, 1.17.1
>Reporter: Matthias Pohl
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: auto-deprioritized-critical, pull-request-available, 
> test-stability
> Fix For: 1.19.0, 1.18.1, 1.17.3
>
>
> {{PlannerScalaFreeITCase.testImperativeUdaf}} failed:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=46812=logs=87489130-75dc-54e4-1f45-80c30aa367a3=73da6d75-f30d-5d5a-acbe-487a9dcff678=15012
> {code}
> Mar 05 05:55:50 [ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 62.028 s <<< FAILURE! - in 
> org.apache.flink.table.sql.codegen.PlannerScalaFreeITCase
> Mar 05 05:55:50 [ERROR] PlannerScalaFreeITCase.testImperativeUdaf  Time 
> elapsed: 40.924 s  <<< FAILURE!
> Mar 05 05:55:50 org.opentest4j.AssertionFailedError: Did not get expected 
> results before timeout, actual result: 
> [{"before":null,"after":{"user_name":"Bob","order_cnt":1},"op":"c"}, 
> {"before":null,"after":{"user_name":"Alice","order_cnt":1},"op":"c"}, 
> {"before":{"user_name":"Bob","order_cnt":1},"after":null,"op":"d"}, 
> {"before":null,"after":{"user_name":"Bob","order_cnt":2},"op":"c"}]. ==> 
> expected:  but was: 
> Mar 05 05:55:50   at 
> org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
> Mar 05 05:55:50   at 
> org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
> Mar 05 05:55:50   at 
> org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
> Mar 05 05:55:50   at 
> org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
> Mar 05 05:55:50   at 
> org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:211)
> Mar 05 05:55:50   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.checkJsonResultFile(SqlITCaseBase.java:168)
> Mar 05 05:55:50   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.runAndCheckSQL(SqlITCaseBase.java:111)
> Mar 05 05:55:50   at 
> org.apache.flink.table.sql.codegen.PlannerScalaFreeITCase.testImperativeUdaf(PlannerScalaFreeITCase.java:43)
> [...]
> {code}





Re: [PR] [BP-1.17][FLINK-31339][table][tests] Comparing with materialized result when mini-batch enabled to fix unstable sql e2e tests [flink]

2023-11-28 Thread via GitHub


leonardBang merged PR #23817:
URL: https://github.com/apache/flink/pull/23817


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33668) Decoupling Shuffle network memory and job topology

2023-11-28 Thread dalongliu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17790844#comment-17790844
 ] 

dalongliu commented on FLINK-33668:
---

Big +1, there is also a duplicate issue: 
https://issues.apache.org/jira/browse/FLINK-31643

> Decoupling Shuffle network memory and job topology
> --
>
> Key: FLINK-33668
> URL: https://issues.apache.org/jira/browse/FLINK-33668
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Reporter: Jiang Xin
>Priority: Major
> Fix For: 1.19.0
>
>
> With FLINK-30469  and FLINK-31643, we have decoupled the shuffle network 
> memory and the parallelism of tasks by limiting the number of buffers for 
> each InputGate and ResultPartition. However, when too many shuffle tasks are 
> running simultaneously on the same TaskManager, "Insufficient number of 
> network buffers" errors would still occur. This usually happens when Slot 
> Sharing Group is enabled or a TaskManager contains multiple slots.
> We want to make sure that the TaskManager does not encounter "Insufficient 
> number of network buffers" even if there are dozens of InputGates and 
> ResultPartitions running on the same TaskManager simultaneously.





Re: [PR] [BP-1.18][FLINK-31339][table][tests] Comparing with materialized result when mini-batch enabled to fix unstable sql e2e tests [flink]

2023-11-28 Thread via GitHub


leonardBang merged PR #23816:
URL: https://github.com/apache/flink/pull/23816





Re: [PR] [FLINK-32986][test] Fix createTemporaryFunction type inference error [flink]

2023-11-28 Thread via GitHub


jeyhunkarimov commented on PR #23586:
URL: https://github.com/apache/flink/pull/23586#issuecomment-1830956237

   @flinkbot run azure





Re: [PR] [FLINK-33422] Move Calc restore tests [flink]

2023-11-28 Thread via GitHub


flinkbot commented on PR #23822:
URL: https://github.com/apache/flink/pull/23822#issuecomment-1830929772

   
   ## CI report:
   
   * 5f43ab687076b036a9f25d90aab8a9faae19cc5a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[PR] [FLINK-33422] Move Calc restore tests [flink]

2023-11-28 Thread via GitHub


bvarghese1 opened a new pull request, #23822:
URL: https://github.com/apache/flink/pull/23822

   
   
   ## What is the purpose of the change
   
   Move Calc restore tests to the right package 
`org.apache.flink.table.planner.plan.nodes.exec.stream`
   Related PR - https://github.com/apache/flink/pull/23623
   
   ## Verifying this change
   Tests are already present and running for Calc node
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   





[jira] [Created] (FLINK-33676) Implement restore tests for WindowAggregate node

2023-11-28 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-33676:
--

 Summary: Implement restore tests for WindowAggregate node
 Key: FLINK-33676
 URL: https://issues.apache.org/jira/browse/FLINK-33676
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes
Assignee: Jim Hughes








[PR] Add flink-shaded 18.0 release [flink-web]

2023-11-28 Thread via GitHub


snuyanzin opened a new pull request, #701:
URL: https://github.com/apache/flink-web/pull/701

   (no comment)





Re: [PR] [FLINK-33667] Implement restore tests for MatchRecognize node [flink]

2023-11-28 Thread via GitHub


flinkbot commented on PR #23821:
URL: https://github.com/apache/flink/pull/23821#issuecomment-1830664311

   
   ## CI report:
   
   * 76d0fd68b9dd0891f54737580e72b46300169350 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-33667) Implement restore tests for MatchRecognize node

2023-11-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33667:
---
Labels: pull-request-available  (was: )

> Implement restore tests for MatchRecognize node
> ---
>
> Key: FLINK-33667
> URL: https://issues.apache.org/jira/browse/FLINK-33667
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
>






[PR] [FLINK-33667] Implement restore tests for MatchRecognize node [flink]

2023-11-28 Thread via GitHub


jnh5y opened a new pull request, #23821:
URL: https://github.com/apache/flink/pull/23821

   ## What is the purpose of the change
   
   Implement restore tests for MatchRecognize node
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
   * Added restore tests for MatchRecognize node which verifies the generated 
compiled plan with the saved compiled plan
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)





Re: [PR] [FLINK-33433][rest] Introduce async-profiler to support profiling Job… [flink]

2023-11-28 Thread via GitHub


flinkbot commented on PR #23820:
URL: https://github.com/apache/flink/pull/23820#issuecomment-1830313536

   
   ## CI report:
   
   * 48b2e2941d84938b7fecc74df5b110fe853835c4 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-33433) Support invoke async-profiler on Jobmanager through REST API

2023-11-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33433:
---
Labels: pull-request-available  (was: )

> Support invoke async-profiler on Jobmanager through REST API
> 
>
> Key: FLINK-33433
> URL: https://issues.apache.org/jira/browse/FLINK-33433
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / REST
>Affects Versions: 1.19.0
>Reporter: Yu Chen
>Assignee: Yu Chen
>Priority: Major
>  Labels: pull-request-available
>






[PR] [FLINK-33433][rest] Introduce async-profiler to support profiling Job… [flink]

2023-11-28 Thread via GitHub


yuchen-ecnu opened a new pull request, #23820:
URL: https://github.com/apache/flink/pull/23820

   …Manager via REST API
   
   ## What is the purpose of the change
   
   This is a subtask of 
[FLIP-375](https://cwiki.apache.org/confluence/x/64lEE), which introduces the 
async-profiler for profiling Jobmanager. 
   
   ## Brief change log
   - Include async-profiler dependency
   - Introduce APIs for Creating Profiling Instances / Downloading Profiling 
Results / Retrieving Profiling List 
   - Provide a web page for profiling the JobManager in the Flink web UI
   
   
   
   ## Verifying this change
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): **yes** 
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: **no**
 - The serializers: **no** 
 - The runtime per-record code paths (performance sensitive): **no** 
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: **no** 
 - The S3 file system connector: **no**
   
   ## Documentation
   
 - Does this pull request introduce a new feature? **yes**
 - If yes, how is the feature documented? not documented, it will be added 
in [FLINK-33436](https://issues.apache.org/jira/browse/FLINK-33436)
   





Re: [PR] [hotfix] improve comments in archunit.properties in common modules [flink]

2023-11-28 Thread via GitHub


echauchot commented on PR #23723:
URL: https://github.com/apache/flink/pull/23723#issuecomment-1830073846

@zentol considering this PR only touches comments in the archunit configuration, 
and it is the same content that [you already 
reviewed](https://github.com/apache/flink-connector-shared-utils/pull/23#pullrequestreview-1731890558),
 I don't really need a review of it, so I'll self-merge





[jira] [Created] (FLINK-33675) Serialize ValueLiteralExpressions into SQL

2023-11-28 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33675:


 Summary: Serialize ValueLiteralExpressions into SQL
 Key: FLINK-33675
 URL: https://issues.apache.org/jira/browse/FLINK-33675
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.19.0







