[jira] [Commented] (FLINK-29398) Utilize Rack Awareness in Flink Consumer

2022-09-23 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608582#comment-17608582
 ] 

Martijn Visser commented on FLINK-29398:


[~renqs] What do you think?

> Utilize Rack Awareness in Flink Consumer
> 
>
> Key: FLINK-29398
> URL: https://issues.apache.org/jira/browse/FLINK-29398
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Jeremy DeGroot
>Priority: Major
>
> [KIP-708|https://cwiki.apache.org/confluence/display/KAFKA/KIP-708%3A+Rack+awareness+for+Kafka+Streams]
>  was implemented some time ago in Kafka. This allows brokers and consumers to 
> communicate about the rack (or AWS Availability Zone) they're located in. 
> Reading from a local broker can reduce bandwidth costs and improve latency 
> for your consumers.
> Flink Kafka consumers currently cannot easily use rack awareness if they're 
> deployed across multiple racks or availability zones, because they have no 
> control over which rack the Task Manager they'll be assigned to is located in. 
> This improvement proposes that a Kafka Consumer could be configured with a 
> callback or Future that runs when the consumer is configured on the Task 
> Manager and sets the appropriate rack value at runtime if one is provided. 
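
For illustration, the rack value can already be passed through to the consumer when it happens to be known at job-construction time. A minimal sketch follows; the `RACK_ID` environment variable is hypothetical, and the point of this ticket is that the real value is usually only known once the consumer is instantiated on a Task Manager:

{code:java}
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.KafkaSourceBuilder;
import org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema;

public class RackAwareSourceSketch {

    public static KafkaSource<String> buildSource() {
        // Hypothetical: assumes the deployment exposes its rack / availability
        // zone through an environment variable visible where the job is built.
        String rack = System.getenv("RACK_ID");

        KafkaSourceBuilder<String> builder =
                KafkaSource.<String>builder()
                        .setBootstrapServers("broker:9092")
                        .setTopics("my-topic")
                        .setGroupId("my-group")
                        .setDeserializer(
                                KafkaRecordDeserializationSchema.valueOnly(
                                        new SimpleStringSchema()));
        if (rack != null) {
            // "client.rack" is the standard Kafka consumer property that lets
            // the broker serve fetches from the closest replica.
            builder.setProperty("client.rack", rack);
        }
        return builder.build();
    }
}
{code}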



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-kubernetes-operator] gyfora merged pull request #375: [FLINK-29327][operator] remove operator config from job runtime config before d…

2022-09-23 Thread GitBox


gyfora merged PR #375:
URL: https://github.com/apache/flink-kubernetes-operator/pull/375


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-table-store] zjureel commented on pull request #300: [FLINK-29276] BinaryInMemorySortBuffer supports to clear all memory segments

2022-09-23 Thread GitBox


zjureel commented on PR #300:
URL: 
https://github.com/apache/flink-table-store/pull/300#issuecomment-1255863925

   Hi @JingsongLi @SteNicholas, I have updated the code, thanks.
   
   To @JingsongLi: I have tried decreasing the memory buffers in 
`WritePreemptMemoryTest.createFileStoreTable`. Unfortunately, it causes 
`java.lang.RuntimeException: File deletion conflicts detected! Give up 
committing compact changes`, which is the same behavior as on the master 
branch. Is this a known problem?
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-29327) Operator configs are showing up among standard Flink configs

2022-09-23 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-29327.
--
Resolution: Fixed

merged to main 7f7b77cc434e6c2b67071404c5c0a8c0a28e36db

> Operator configs are showing up among standard Flink configs
> 
>
> Key: FLINK-29327
> URL: https://issues.apache.org/jira/browse/FLINK-29327
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.1.0
>Reporter: Matyas Orhidi
>Assignee: Zhenqiu Huang
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-29382) Flink fails to start when created using quick guide for flink operator

2022-09-23 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-29382.
--
Resolution: Cannot Reproduce

> Flink fails to start when created using quick guide for flink operator
> --
>
> Key: FLINK-29382
> URL: https://issues.apache.org/jira/browse/FLINK-29382
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.1.0
>Reporter: Barisa
>Priority: Major
>
> I followed 
> [https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/try-flink-kubernetes-operator/quick-start/]
>  to deploy flink operator and then the flink job.
>  
>  
> When following step 
>  {{kubectl create -f 
> https://raw.githubusercontent.com/apache/flink-kubernetes-operator/release-1.1/examples/basic.yaml}}
> the pod starts, but then it keeps crashing with the following exception.
>  
> {noformat}
> Caused by: io.fabric8.kubernetes.client.KubernetesClientException: pods is 
> forbidden: User "system:anonymous" cannot watch resource "pods" in API group 
> "" in the namespace "zonda"
>   at 
> io.fabric8.kubernetes.client.dsl.internal.WatcherWebSocketListener.onFailure(WatcherWebSocketListener.java:74)
>  ~[flink-dist-1.15.2.jar:1.15.2]
>   at 
> org.apache.flink.kubernetes.shaded.okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:570)
>  ~[flink-dist-1.15.2.jar:1.15.2]
>   at 
> org.apache.flink.kubernetes.shaded.okhttp3.internal.ws.RealWebSocket$1.onResponse(RealWebSocket.java:199)
>  ~[flink-dist-1.15.2.jar:1.15.2]
>   at 
> org.apache.flink.kubernetes.shaded.okhttp3.RealCall$AsyncCall.execute(RealCall.java:174)
>  ~[flink-dist-1.15.2.jar:1.15.2]
>   at 
> org.apache.flink.kubernetes.shaded.okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
>  ~[flink-dist-1.15.2.jar:1.15.2]
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
> ~[?:?]
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
> ~[?:?]
> {noformat}
> I also noticed the following log lines:
> {noformat}
> 2022-09-21 13:32:05,715 WARN  io.fabric8.kubernetes.client.Config 
>  [] - Error reading service account token from: 
> [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring.
> 2022-09-21 13:32:05,719 WARN  io.fabric8.kubernetes.client.Config 
>  [] - Error reading service account token from: 
> [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring.
> {noformat}
> I think the problem is that the container runs as user root, which later uses 
> gosu to become the flink user. However, the service account token is only 
> accessible to the main user in the container, which is root:
> {noformat}
> root@basic-example-658578895d-qwlb2:/opt/flink# ls -hltr 
> /var/run/secrets/kubernetes.io/serviceaccount/token
> lrwxrwxrwx. 1 root 1337 12 Sep 21 08:57 
> /var/run/secrets/kubernetes.io/serviceaccount/token -> ..data/token
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] fsk119 commented on pull request #20887: [FLINK-29229][hive] Fix ObjectStore leak when different users has dif…

2022-09-23 Thread GitBox


fsk119 commented on PR #20887:
URL: https://github.com/apache/flink/pull/20887#issuecomment-1255870285

   Thanks for your review. Since the comment is not related to the code, I 
will take this offline.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] fsk119 commented on pull request #20887: [FLINK-29229][hive] Fix ObjectStore leak when different users has dif…

2022-09-23 Thread GitBox


fsk119 commented on PR #20887:
URL: https://github.com/apache/flink/pull/20887#issuecomment-1255870612

   The failed test is not related to the fix. Merging...


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-29387) IntervalJoinITCase.testIntervalJoinSideOutputRightLateData failed with AssertionError

2022-09-23 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608591#comment-17608591
 ] 

Huang Xingbo commented on FLINK-29387:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41276&view=logs&j=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3&t=0c010d0c-3dec-5bf1-d408-7b18988b1b2b&l=8100

> IntervalJoinITCase.testIntervalJoinSideOutputRightLateData failed with 
> AssertionError
> -
>
> Key: FLINK-29387
> URL: https://issues.apache.org/jira/browse/FLINK-29387
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.17.0
>Reporter: Huang Xingbo
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 2022-09-22T04:40:21.9296331Z Sep 22 04:40:21 [ERROR] 
> org.apache.flink.test.streaming.runtime.IntervalJoinITCase.testIntervalJoinSideOutputRightLateData
>   Time elapsed: 2.46 s  <<< FAILURE!
> 2022-09-22T04:40:21.9297487Z Sep 22 04:40:21 java.lang.AssertionError: 
> expected:<[(key,2)]> but was:<[]>
> 2022-09-22T04:40:21.9298208Z Sep 22 04:40:21  at 
> org.junit.Assert.fail(Assert.java:89)
> 2022-09-22T04:40:21.9298927Z Sep 22 04:40:21  at 
> org.junit.Assert.failNotEquals(Assert.java:835)
> 2022-09-22T04:40:21.9299655Z Sep 22 04:40:21  at 
> org.junit.Assert.assertEquals(Assert.java:120)
> 2022-09-22T04:40:21.9300403Z Sep 22 04:40:21  at 
> org.junit.Assert.assertEquals(Assert.java:146)
> 2022-09-22T04:40:21.9301538Z Sep 22 04:40:21  at 
> org.apache.flink.test.streaming.runtime.IntervalJoinITCase.expectInAnyOrder(IntervalJoinITCase.java:521)
> 2022-09-22T04:40:21.9302578Z Sep 22 04:40:21  at 
> org.apache.flink.test.streaming.runtime.IntervalJoinITCase.testIntervalJoinSideOutputRightLateData(IntervalJoinITCase.java:280)
> 2022-09-22T04:40:21.9303641Z Sep 22 04:40:21  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-09-22T04:40:21.9304472Z Sep 22 04:40:21  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-09-22T04:40:21.9305371Z Sep 22 04:40:21  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-09-22T04:40:21.9306195Z Sep 22 04:40:21  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-09-22T04:40:21.9307011Z Sep 22 04:40:21  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-09-22T04:40:21.9308077Z Sep 22 04:40:21  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-09-22T04:40:21.9308968Z Sep 22 04:40:21  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-09-22T04:40:21.9309849Z Sep 22 04:40:21  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-09-22T04:40:21.9310704Z Sep 22 04:40:21  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-09-22T04:40:21.9311533Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-09-22T04:40:21.9312386Z Sep 22 04:40:21  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-09-22T04:40:21.9313231Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-09-22T04:40:21.9314985Z Sep 22 04:40:21  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-09-22T04:40:21.9315857Z Sep 22 04:40:21  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-09-22T04:40:21.9316633Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-09-22T04:40:21.9317450Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-09-22T04:40:21.9318209Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-09-22T04:40:21.9318949Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-09-22T04:40:21.9319680Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2022-09-22T04:40:21.9320401Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-09-22T04:40:21.9321130Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 2022-09-22T04:40:21.9321822Z Sep 22 04:40:21  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> 2022-09-22T04:40:21.9322498Z Sep 22 04:40:21  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> 2022-09-22T04:40:21.9323248Z Sep 22 04:40:21  at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
> 2022
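
For readers unfamiliar with the feature under test: the test exercises the DataStream interval join's late-data side output. A minimal sketch of that pattern, with hypothetical stream and field names (not taken from the test itself):

{code:java}
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.functions.co.ProcessJoinFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class IntervalJoinLateDataSketch {

    /** Joins two tuple streams and returns the right-side records that arrived too late. */
    public static DataStream<Tuple2<String, Integer>> lateRightRecords(
            DataStream<Tuple2<String, Integer>> left, DataStream<Tuple2<String, Integer>> right) {

        // Late elements on the right input are diverted to this tag instead of being dropped.
        OutputTag<Tuple2<String, Integer>> lateRight =
                new OutputTag<Tuple2<String, Integer>>("late-right") {};

        SingleOutputStreamOperator<String> joined =
                left.keyBy(t -> t.f0)
                        .intervalJoin(right.keyBy(t -> t.f0))
                        .between(Time.milliseconds(-1), Time.milliseconds(1))
                        .sideOutputRightLateData(lateRight)
                        .process(
                                new ProcessJoinFunction<
                                        Tuple2<String, Integer>, Tuple2<String, Integer>, String>() {
                                    @Override
                                    public void processElement(
                                            Tuple2<String, Integer> l,
                                            Tuple2<String, Integer> r,
                                            Context ctx,
                                            Collector<String> out) {
                                        out.collect(l.f0 + ":" + (l.f1 + r.f1));
                                    }
                                });

        // The failing assertion compares the contents of this side output
        // against the expected late records.
        return joined.getSideOutput(lateRight);
    }
}
{code}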

[jira] [Commented] (FLINK-29387) IntervalJoinITCase.testIntervalJoinSideOutputRightLateData failed with AssertionError

2022-09-23 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608593#comment-17608593
 ] 

Huang Xingbo commented on FLINK-29387:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41276&view=logs&j=2c3cbe13-dee0-5837-cf47-3053da9a8a78&t=b78d9d30-509a-5cea-1fef-db7abaa325ae

> IntervalJoinITCase.testIntervalJoinSideOutputRightLateData failed with 
> AssertionError
> -
>
> Key: FLINK-29387
> URL: https://issues.apache.org/jira/browse/FLINK-29387
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.17.0
>Reporter: Huang Xingbo
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 2022-09-22T04:40:21.9296331Z Sep 22 04:40:21 [ERROR] 
> org.apache.flink.test.streaming.runtime.IntervalJoinITCase.testIntervalJoinSideOutputRightLateData
>   Time elapsed: 2.46 s  <<< FAILURE!
> 2022-09-22T04:40:21.9297487Z Sep 22 04:40:21 java.lang.AssertionError: 
> expected:<[(key,2)]> but was:<[]>
> 2022-09-22T04:40:21.9298208Z Sep 22 04:40:21  at 
> org.junit.Assert.fail(Assert.java:89)
> 2022-09-22T04:40:21.9298927Z Sep 22 04:40:21  at 
> org.junit.Assert.failNotEquals(Assert.java:835)
> 2022-09-22T04:40:21.9299655Z Sep 22 04:40:21  at 
> org.junit.Assert.assertEquals(Assert.java:120)
> 2022-09-22T04:40:21.9300403Z Sep 22 04:40:21  at 
> org.junit.Assert.assertEquals(Assert.java:146)
> 2022-09-22T04:40:21.9301538Z Sep 22 04:40:21  at 
> org.apache.flink.test.streaming.runtime.IntervalJoinITCase.expectInAnyOrder(IntervalJoinITCase.java:521)
> 2022-09-22T04:40:21.9302578Z Sep 22 04:40:21  at 
> org.apache.flink.test.streaming.runtime.IntervalJoinITCase.testIntervalJoinSideOutputRightLateData(IntervalJoinITCase.java:280)
> 2022-09-22T04:40:21.9303641Z Sep 22 04:40:21  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-09-22T04:40:21.9304472Z Sep 22 04:40:21  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-09-22T04:40:21.9305371Z Sep 22 04:40:21  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-09-22T04:40:21.9306195Z Sep 22 04:40:21  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-09-22T04:40:21.9307011Z Sep 22 04:40:21  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-09-22T04:40:21.9308077Z Sep 22 04:40:21  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-09-22T04:40:21.9308968Z Sep 22 04:40:21  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-09-22T04:40:21.9309849Z Sep 22 04:40:21  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-09-22T04:40:21.9310704Z Sep 22 04:40:21  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-09-22T04:40:21.9311533Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-09-22T04:40:21.9312386Z Sep 22 04:40:21  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-09-22T04:40:21.9313231Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-09-22T04:40:21.9314985Z Sep 22 04:40:21  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-09-22T04:40:21.9315857Z Sep 22 04:40:21  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-09-22T04:40:21.9316633Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-09-22T04:40:21.9317450Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-09-22T04:40:21.9318209Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-09-22T04:40:21.9318949Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-09-22T04:40:21.9319680Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2022-09-22T04:40:21.9320401Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-09-22T04:40:21.9321130Z Sep 22 04:40:21  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 2022-09-22T04:40:21.9321822Z Sep 22 04:40:21  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> 2022-09-22T04:40:21.9322498Z Sep 22 04:40:21  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> 2022-09-22T04:40:21.9323248Z Sep 22 04:40:21  at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
> 2022-09-22T

[GitHub] [flink] fsk119 closed pull request #20887: [FLINK-29274][hive] Fix ObjectStore leak when different users has dif…

2022-09-23 Thread GitBox


fsk119 closed pull request #20887: [FLINK-29274][hive] Fix ObjectStore leak 
when different users has dif…
URL: https://github.com/apache/flink/pull/20887


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-29274) HiveServer2EndpointITCase.testGetFunctionWithPattern failed with Persistence Manager has been closed

2022-09-23 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608598#comment-17608598
 ] 

Shengkai Fang commented on FLINK-29274:
---

Merged into release-1.16: c653d162f224b8b938e32323a636ae07119f0c11

> HiveServer2EndpointITCase.testGetFunctionWithPattern failed with Persistence 
> Manager has been closed
> 
>
> Key: FLINK-29274
> URL: https://issues.apache.org/jira/browse/FLINK-29274
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: Xingbo Huang
>Assignee: yuzelin
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.16.0
>
>
> {code:java}
> 4.6807800Z Sep 13 02:07:54 [ERROR] 
> org.apache.flink.table.endpoint.hive.HiveServer2EndpointITCase.testGetFunctionWithPattern
>   Time elapsed: 22.127 s  <<< ERROR!
> 2022-09-13T02:07:54.6813586Z Sep 13 02:07:54 java.sql.SQLException: 
> javax.jdo.JDOFatalUserException: Persistence Manager has been closed
> 2022-09-13T02:07:54.6815315Z Sep 13 02:07:54  at 
> org.apache.hive.jdbc.HiveStatement.waitForOperationToComplete(HiveStatement.java:401)
> 2022-09-13T02:07:54.6816917Z Sep 13 02:07:54  at 
> org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:266)
> 2022-09-13T02:07:54.6818338Z Sep 13 02:07:54  at 
> org.apache.flink.table.endpoint.hive.HiveServer2EndpointITCase.lambda$testGetFunctionWithPattern$29(HiveServer2EndpointITCase.java:542)
> 2022-09-13T02:07:54.6819988Z Sep 13 02:07:54  at 
> org.apache.flink.table.endpoint.hive.HiveServer2EndpointITCase.runGetObjectTest(HiveServer2EndpointITCase.java:633)
> 2022-09-13T02:07:54.6821484Z Sep 13 02:07:54  at 
> org.apache.flink.table.endpoint.hive.HiveServer2EndpointITCase.runGetObjectTest(HiveServer2EndpointITCase.java:621)
> 2022-09-13T02:07:54.6823318Z Sep 13 02:07:54  at 
> org.apache.flink.table.endpoint.hive.HiveServer2EndpointITCase.testGetFunctionWithPattern(HiveServer2EndpointITCase.java:539)
> 2022-09-13T02:07:54.6824711Z Sep 13 02:07:54  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-09-13T02:07:54.6825817Z Sep 13 02:07:54  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-09-13T02:07:54.6827003Z Sep 13 02:07:54  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-09-13T02:07:54.6828259Z Sep 13 02:07:54  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-09-13T02:07:54.6829478Z Sep 13 02:07:54  at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
> 2022-09-13T02:07:54.6830717Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
> 2022-09-13T02:07:54.6832444Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
> 2022-09-13T02:07:54.6834028Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
> 2022-09-13T02:07:54.6835304Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
> 2022-09-13T02:07:54.6836734Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
> 2022-09-13T02:07:54.6838257Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
> 2022-09-13T02:07:54.6839775Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
> 2022-09-13T02:07:54.6841400Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
> 2022-09-13T02:07:54.6843309Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
> 2022-09-13T02:07:54.6845300Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
> 2022-09-13T02:07:54.6846879Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
> 2022-09-13T02:07:54.6848406Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
> 2022-09-13T02:07:54.6849760Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
> 2022-09-13T02:07:54.6851297Z Sep 13 02:07:54 

[jira] [Assigned] (FLINK-29274) HiveServer2EndpointITCase.testGetFunctionWithPattern failed with Persistence Manager has been closed

2022-09-23 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-29274:
-

Assignee: Shengkai Fang  (was: yuzelin)

> HiveServer2EndpointITCase.testGetFunctionWithPattern failed with Persistence 
> Manager has been closed
> 
>
> Key: FLINK-29274
> URL: https://issues.apache.org/jira/browse/FLINK-29274
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0
>Reporter: Xingbo Huang
>Assignee: Shengkai Fang
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.16.0
>
>
> {code:java}
> 4.6807800Z Sep 13 02:07:54 [ERROR] 
> org.apache.flink.table.endpoint.hive.HiveServer2EndpointITCase.testGetFunctionWithPattern
>   Time elapsed: 22.127 s  <<< ERROR!
> 2022-09-13T02:07:54.6813586Z Sep 13 02:07:54 java.sql.SQLException: 
> javax.jdo.JDOFatalUserException: Persistence Manager has been closed
> 2022-09-13T02:07:54.6815315Z Sep 13 02:07:54  at 
> org.apache.hive.jdbc.HiveStatement.waitForOperationToComplete(HiveStatement.java:401)
> 2022-09-13T02:07:54.6816917Z Sep 13 02:07:54  at 
> org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:266)
> 2022-09-13T02:07:54.6818338Z Sep 13 02:07:54  at 
> org.apache.flink.table.endpoint.hive.HiveServer2EndpointITCase.lambda$testGetFunctionWithPattern$29(HiveServer2EndpointITCase.java:542)
> 2022-09-13T02:07:54.6819988Z Sep 13 02:07:54  at 
> org.apache.flink.table.endpoint.hive.HiveServer2EndpointITCase.runGetObjectTest(HiveServer2EndpointITCase.java:633)
> 2022-09-13T02:07:54.6821484Z Sep 13 02:07:54  at 
> org.apache.flink.table.endpoint.hive.HiveServer2EndpointITCase.runGetObjectTest(HiveServer2EndpointITCase.java:621)
> 2022-09-13T02:07:54.6823318Z Sep 13 02:07:54  at 
> org.apache.flink.table.endpoint.hive.HiveServer2EndpointITCase.testGetFunctionWithPattern(HiveServer2EndpointITCase.java:539)
> 2022-09-13T02:07:54.6824711Z Sep 13 02:07:54  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-09-13T02:07:54.6825817Z Sep 13 02:07:54  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-09-13T02:07:54.6827003Z Sep 13 02:07:54  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-09-13T02:07:54.6828259Z Sep 13 02:07:54  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-09-13T02:07:54.6829478Z Sep 13 02:07:54  at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
> 2022-09-13T02:07:54.6830717Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
> 2022-09-13T02:07:54.6832444Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
> 2022-09-13T02:07:54.6834028Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
> 2022-09-13T02:07:54.6835304Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
> 2022-09-13T02:07:54.6836734Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
> 2022-09-13T02:07:54.6838257Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
> 2022-09-13T02:07:54.6839775Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
> 2022-09-13T02:07:54.6841400Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
> 2022-09-13T02:07:54.6843309Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
> 2022-09-13T02:07:54.6845300Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
> 2022-09-13T02:07:54.6846879Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
> 2022-09-13T02:07:54.6848406Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
> 2022-09-13T02:07:54.6849760Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
> 2022-09-13T02:07:54.6851297Z Sep 13 02:07:54  at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.

[jira] [Commented] (FLINK-29315) HDFSTest#testBlobServerRecovery fails on CI

2022-09-23 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608605#comment-17608605
 ] 

Chesnay Schepler commented on FLINK-29315:
--

I'm -1 on adding some bespoke {{ls}} implementation to the system; god knows 
whether it hides something else down the line.

Let's disable the test and I'll try a few more things.
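
For reference, a minimal sketch of what temporarily disabling it could look like, assuming the class still uses JUnit 4 (as the {{java.lang.AssertionError}} in the trace suggests):

{code:java}
import org.junit.Ignore;
import org.junit.Test;

public class HDFSTest {

    // Disabled until the CI permission issue is understood; see FLINK-29315.
    @Ignore("FLINK-29315: CI cannot run 'ls' (error=1, Operation not permitted)")
    @Test
    public void testBlobServerRecovery() throws Exception {
        // original test body unchanged
    }
}
{code}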

> HDFSTest#testBlobServerRecovery fails on CI
> ---
>
> Key: FLINK-29315
> URL: https://issues.apache.org/jira/browse/FLINK-29315
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.16.0, 1.15.2
>Reporter: Chesnay Schepler
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0
>
>
> The test started failing 2 days ago on different branches. I suspect 
> something's wrong with the CI infrastructure.
> {code:java}
> Sep 15 09:11:22 [ERROR] Failures: 
> Sep 15 09:11:22 [ERROR]   HDFSTest.testBlobServerRecovery Multiple Failures 
> (2 failures)
> Sep 15 09:11:22   java.lang.AssertionError: Test failed Error while 
> running command to get file permissions : java.io.IOException: Cannot run 
> program "ls": error=1, Operation not permitted
> Sep 15 09:11:22   at 
> java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.runCommand(Shell.java:913)
> Sep 15 09:11:22   at org.apache.hadoop.util.Shell.run(Shell.java:869)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.execCommand(Shell.java:1264)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.execCommand(Shell.java:1246)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1089)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:697)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:672)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2580)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2622)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2604)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2497)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1501)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:851)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:485)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:444)
> Sep 15 09:11:22   at 
> org.apache.flink.hdfstests.HDFSTest.createHDFS(HDFSTest.java:93)
> Sep 15 09:11:22   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Sep 15 09:11:22   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Sep 15 09:11:22   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Sep 15 09:11:22   at 
> java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
> Sep 15 09:11:22   ... 67 more
> Sep 15 09:11:22 
> Sep 15 09:11:22   java.lang.NullPointerException: 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] fsk119 opened a new pull request, #20891: [FLINK-29274][hive] Fix ObjectStore leak when different users has dif…

2022-09-23 Thread GitBox


fsk119 opened a new pull request, #20891:
URL: https://github.com/apache/flink/pull/20891

   …ferent config
   
   
   
   ## What is the purpose of the change
   
   Cherry-pick the fix #20887 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #20891: [FLINK-29274][hive] Fix ObjectStore leak when different users has dif…

2022-09-23 Thread GitBox


flinkbot commented on PR #20891:
URL: https://github.com/apache/flink/pull/20891#issuecomment-1255894455

   
   ## CI report:
   
   * 395f8780fc90b8963c9425c53309a7a19d0d77f5 UNKNOWN
   
   
   Bot commands:
   The @flinkbot bot supports the following commands:
   
   - `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] XComp commented on pull request #20870: [FLINK-29375][rpc] Move getSelfGateway() into RpcService

2022-09-23 Thread GitBox


XComp commented on PR #20870:
URL: https://github.com/apache/flink/pull/20870#issuecomment-1255916874

   I moved the diagrams into its own repository under `XComp/flink-diagrams` 
(with GitHub Actions support for the SVG generation). Feel free to propose a 
change there if you're still keen to enlighten me with your change on the 
[Flink RPC System 
diagram](https://github.com/XComp/flink-diagrams/blob/main/plantuml/flink-rpc-system.puml)
 :sunglasses: 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] gaborgsomogyi commented on a diff in pull request #20875: [FLINK-29363][runtime-web] allow fully redirection in web dashboard

2022-09-23 Thread GitBox


gaborgsomogyi commented on code in PR #20875:
URL: https://github.com/apache/flink/pull/20875#discussion_r978376452


##
flink-runtime-web/web-dashboard/src/app/app.interceptor.ts:
##
@@ -39,6 +46,16 @@ export class AppInterceptor implements HttpInterceptor {
 
 return next.handle(req.clone({ withCredentials: true })).pipe(
   catchError(res => {
+        if (
+          res instanceof HttpResponseBase &&
+          (res.status == HttpStatusCode.MovedPermanently ||
+            res.status == HttpStatusCode.TemporaryRedirect ||
+            res.status == HttpStatusCode.SeeOther) &&

Review Comment:
   I was thinking about adding `302 Found`, but that's optional, which is denoted 
with the MAY keyword in the documentation: 
https://user-images.githubusercontent.com/18561820/191918050-46b1acd5-9c0e-4dc4-ba8a-0928eb706800.png
   It's good as-is from my perspective.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] gaborgsomogyi commented on pull request #20875: [FLINK-29363][runtime-web] allow fully redirection in web dashboard

2022-09-23 Thread GitBox


gaborgsomogyi commented on PR #20875:
URL: https://github.com/apache/flink/pull/20875#issuecomment-1255919386

   I presume a rebase onto the latest master is needed, since this is the second 
time the following error has appeared:
   ```
   Sep 22 08:19:47  Suppressed: java.lang.AssertionError: Test failed Error 
while running command to get file permissions : java.io.IOException: Cannot run 
program "ls": error=1, Operation not permitted
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-28786) Cannot run PyFlink 1.16 on MacOS with M1 chip

2022-09-23 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608622#comment-17608622
 ] 

Dian Fu commented on FLINK-28786:
-

[~lemonjing] Hi, could you provide more information? Otherwise, I'm inclined to 
close this issue as "Cannot Reproduce".

> Cannot run PyFlink 1.16 on MacOS with M1 chip
> -
>
> Key: FLINK-28786
> URL: https://issues.apache.org/jira/browse/FLINK-28786
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.16.0
>Reporter: Ran Tao
>Priority: Major
>
> I have tested it with 2 M1 machines; the bug reproduces 100%.
> 1. M1 machine: macOS Big Sur 11.5.1 & JDK 8 & JDK 11 & Python 3.8 & Python 3.9
> 2. M1 machine: macOS Monterey 12.1 & JDK 8 & JDK 11 & Python 3.8 & Python 3.9
> Reproduce steps:
> 1. python -m pip install -r flink-python/dev/dev-requirements.txt
> 2. cd flink-python; python setup.py sdist bdist_wheel; cd 
> apache-flink-libraries; python setup.py sdist; cd ..;
> 3. python -m pip install apache-flink-libraries/dist/*.tar.gz
> 4. python -m pip install dist/*.whl
> when run 
> [word_count.py|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/python/table_api_tutorial/]
>  it will cause
> {code:java}
> <frozen importlib._bootstrap>:219: RuntimeWarning: 
> apache_beam.coders.coder_impl.StreamCoderImpl size changed, may indicate 
> binary incompatibility. Expected 24 from C header, got 32 from PyObject
> Traceback (most recent call last):
>   File "/Users/chucheng/GitLab/pyflink-demo/table/streaming/word_count.py", 
> line 129, in <module>
> word_count(known_args.input, known_args.output)
>   File "/Users/chucheng/GitLab/pyflink-demo/table/streaming/word_count.py", 
> line 49, in word_count
> t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())
>   File 
> "/Users/chucheng/venv/lib/python3.8/site-packages/pyflink/table/table_environment.py",
>  line 121, in create
> return TableEnvironment(j_tenv)
>   File 
> "/Users/chucheng/venv/lib/python3.8/site-packages/pyflink/table/table_environment.py",
>  line 100, in __init__
> self._open()
>   File 
> "/Users/chucheng/venv/lib/python3.8/site-packages/pyflink/table/table_environment.py",
>  line 1637, in _open
> startup_loopback_server()
>   File 
> "/Users/chucheng/venv/lib/python3.8/site-packages/pyflink/table/table_environment.py",
>  line 1628, in startup_loopback_server
> from pyflink.fn_execution.beam.beam_worker_pool_service import \
>   File 
> "/Users/chucheng/venv/lib/python3.8/site-packages/pyflink/fn_execution/beam/beam_worker_pool_service.py",
>  line 44, in <module>
> from pyflink.fn_execution.beam import beam_sdk_worker_main  # noqa # 
> pylint: disable=unused-import
>   File 
> "/Users/chucheng/venv/lib/python3.8/site-packages/pyflink/fn_execution/beam/beam_sdk_worker_main.py",
>  line 21, in <module>
> import pyflink.fn_execution.beam.beam_operations # noqa # pylint: 
> disable=unused-import
>   File 
> "/Users/chucheng/venv/lib/python3.8/site-packages/pyflink/fn_execution/beam/beam_operations.py",
>  line 27, in <module>
> from pyflink.fn_execution.state_impl import RemoteKeyedStateBackend, 
> RemoteOperatorStateBackend
>   File 
> "/Users/chucheng/venv/lib/python3.8/site-packages/pyflink/fn_execution/state_impl.py",
>  line 33, in <module>
> from pyflink.fn_execution.beam.beam_coders import FlinkCoder
>   File 
> "/Users/chucheng/venv/lib/python3.8/site-packages/pyflink/fn_execution/beam/beam_coders.py",
>  line 27, in <module>
> from pyflink.fn_execution.beam import beam_coder_impl_fast as 
> beam_coder_impl
>   File "pyflink/fn_execution/beam/beam_coder_impl_fast.pyx", line 1, in init 
> pyflink.fn_execution.beam.beam_coder_impl_fast
> KeyError: '__pyx_vtable__'
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-ml] lindong28 commented on a diff in pull request #157: [FLINK-28906] Support windowing in AgglomerativeClustering

2022-09-23 Thread GitBox


lindong28 commented on code in PR #157:
URL: https://github.com/apache/flink-ml/pull/157#discussion_r978349873


##
flink-ml-python/pyflink/ml/lib/clustering/tests/test_agglomerativeclustering.py:
##
@@ -157,6 +163,30 @@ def test_transform(self):
 
self.verify_clustering_result(self.eucliean_ward_threshold_as_two_result,
   outputs[0], "features", "pred")
 
+    def test_transform_with_count_window(self):
+        input_table = self.t_env.from_data_stream(
+            self.env.from_collection([
+                (Vectors.dense([1, 1]),),
+                (Vectors.dense([1, 4]),),
+                (Vectors.dense([1, 0]),),
+                (Vectors.dense([4, 1.5]),),
+                (Vectors.dense([4, 4]),),
+                (Vectors.dense([4, 0]),),
+            ],
+                type_info=Types.ROW_NAMED(
+                    ['features'],
+                    [DenseVectorTypeInfo()])))
+
+        agglomerative_clustering = AgglomerativeClustering() \
+            .set_linkage('average') \
+            .set_distance_measure('euclidean') \
+            .set_prediction_col('pred') \
+            .set_windows(CountTumblingWindows.of(6))

Review Comment:
   Can we use `CountTumblingWindows.of(5)` so that the input stream (which 
contains 6 elements) can be split into two windows? Having all elements in the 
same window seems a bit trivial.



##
flink-ml-python/pyflink/ml/core/windows.py:
##
@@ -0,0 +1,151 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+from abc import ABC
+
+from pyflink.common.time import Time
+
+
+class Windows(ABC):
+    """
+    Windowing strategy that determines how to create mini-batches from input
+    data.
+    """
+    pass
+
+
+class GlobalWindows(Windows):
+    """
+    A Windows that assigns all the elements into a single global window.
+    In order for this windowing strategy to work correctly, the input
+    stream must be bounded.
+    """
+
+    def __eq__(self, other):
+        return isinstance(other, GlobalWindows)
+
+
+class CountTumblingWindows(Windows):
+    """
+    A Windows that groups elements into windows of fixed number of
+    elements. Windows do not overlap.
+    """
+
+    def __init__(self, size: int):
+        super().__init__()
+        self._size = size
+
+    @staticmethod
+    def of(size: int) -> 'CountTumblingWindows':
+        return CountTumblingWindows(size)
+
+    @property
+    def size(self) -> int:
+        return self._size
+
+    def __eq__(self, other):
+        return isinstance(other, CountTumblingWindows) and self._size == other._size
+
+
+class EventTimeTumblingWindows(Windows):
+    """
+    A Windows that groups elements into fixed-size windows based on
+    the timestamp of the elements. Windows do not overlap.
+    """
+
+    def __init__(self, size: Time):
+        super().__init__()
+        self._size = size
+
+    @staticmethod
+    def of(size: Time) -> 'EventTimeTumblingWindows':
+        return EventTimeTumblingWindows(size)
+
+    @property
+    def size(self) -> Time:
+        return self._size
+
+    def __eq__(self, other):
+        return isinstance(other, EventTimeTumblingWindows) and self._size == other._size
+
+
+class ProcessingTimeTumblingWindows(Windows):
+    """
+    A Windows that groups elements into fixed-size windows based on
+    the current system time of the machine the operation is running
+    on. Windows do not overlap.
+    """
+
+    def __init__(self, size: Time):
+        super().__init__()
+        self._size = size
+
+    @staticmethod
+    def of(size: Time) -> 'ProcessingTimeTumblingWindows':
+        return ProcessingTimeTumblingWindows(size)
+
+    @property
+    def size(self) -> Time:
+        return self._size
+
+    def __eq__(self, other):
+        return isinstance(other, ProcessingTimeTumblingWindows) and self._size == other._size
+
+
+class EventTimeSessionWindows(Windows):
+    """
+    A Windows that windows elements into sessions based on the
+    timestamp of the elements. Windows do not 

[GitHub] [flink] afedulov commented on pull request #20865: [FLINK-14896][connectors/kinesis] Shade and relocate Jackson dependen…

2022-09-23 Thread GitBox


afedulov commented on PR #20865:
URL: https://github.com/apache/flink/pull/20865#issuecomment-1255953839

   @flinkbot run azure
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] dannycranmer commented on a diff in pull request #20876: [hotfix][docs] Fix typos in Kinesis Connector docs.

2022-09-23 Thread GitBox


dannycranmer commented on code in PR #20876:
URL: https://github.com/apache/flink/pull/20876#discussion_r978417586


##
docs/content.zh/docs/connectors/table/kinesis.md:
##
@@ -776,7 +776,7 @@ You can enable and configure EFO with the following 
properties:
 This is the preferred strategy for the majority of applications.
 However, jobs with parallelism greater than 1 will result in tasks 
competing to register and acquire the stream consumer ARN.
 For jobs with very large parallelism this can result in an increased 
start-up time.
-The describe operation has a limit of 20 [transactions per 
second](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_DescribeStreamConsumer.html),

Review Comment:
   I think this is intentional; the API is called `DescribeStreamConsumer`. We 
could be more explicit and change it to: 
   
   > The `DescribeStreamConsumer` operation 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-29281) Replace Akka

2022-09-23 Thread Claude Warren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17607600#comment-17607600
 ] 

Claude Warren edited comment on FLINK-29281 at 9/23/22 9:07 AM:


I am working with a group of developers in an attempt to get an Akka fork 
accepted as an Apache Project.  If anyone here is interested in participating 
in that project please reach out to me at  cla...@apache.org

The repository that we are working from is at 
https://github.com/mdedetrich/akka-apache. We are looking for people who want to 
participate as we start into the Apache Incubator process.


was (Author: claudenw):
I am working with a group of developers in an attempt to get an Akka fork 
accepted as an Apache Project.  If anyone here is interested in participating 
in that project please reach out to me at  cla...@apache.org

> Replace Akka
> 
>
> Key: FLINK-29281
> URL: https://issues.apache.org/jira/browse/FLINK-29281
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / RPC
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>
> Following the license change I propose to eventually replace Akka.
> Based on LEGAL-619 an exemption is not feasible, and while a fork _may_ be 
> created, its long-term future is up in the air and I'd be uncomfortable with 
> relying on it.
> I've been experimenting with a new RPC implementation based on gRPC and so 
> far I'm quite optimistic. It's also based on Netty while not requiring as 
> much of a tight coupling as Akka did.
> This would also allow us to sidestep migrating our current Akka setup from 
> Netty 3 (which is affected by several CVEs) to Akka Artery, both saving work 
> and not introducing an entirely different network stack to the project.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-24296) [Umbrella] Refactor Google PubSub to new Source/Sink interfaces

2022-09-23 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-24296:
---
Component/s: Connectors / Google Cloud PubSub

> [Umbrella] Refactor Google PubSub to new Source/Sink interfaces
> ---
>
> Key: FLINK-24296
> URL: https://issues.apache.org/jira/browse/FLINK-24296
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Google Cloud PubSub
>Reporter: Martijn Visser
>Priority: Major
>
> The current Google PubSub connector uses the SourceFunction and SinkFunction, 
> which are being deprecated. The connector should be refactored to use the new 
> Source API (FLIP-27) and Unified Sink API (FLIP-143)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-24296) [Umbrella] Refactor Google PubSub to new Source/Sink interfaces

2022-09-23 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-24296:
---
Summary: [Umbrella] Refactor Google PubSub to new Source/Sink interfaces  
(was: Refactor Google PubSub to new Source/Sink interfaces)

> [Umbrella] Refactor Google PubSub to new Source/Sink interfaces
> ---
>
> Key: FLINK-24296
> URL: https://issues.apache.org/jira/browse/FLINK-24296
> Project: Flink
>  Issue Type: Improvement
>Reporter: Martijn Visser
>Priority: Major
>
> The current Google PubSub connector uses the SourceFunction and SinkFunction, 
> which are being deprecated. The connector should be refactored to use the new 
> Source API (FLIP-27) and Unified Sink API (FLIP-143)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] lsyldliu commented on a diff in pull request #20884: [FLINK-29389][docs] Update documentation of JDBC and HBase lookup table for new caching options

2022-09-23 Thread GitBox


lsyldliu commented on code in PR #20884:
URL: https://github.com/apache/flink/pull/20884#discussion_r978427558


##
docs/content.zh/docs/connectors/table/jdbc.md:
##
@@ -263,6 +279,47 @@ ON myTopic.key = MyUserTable.id;
 
 
 
+### 已弃用的配置
+这些弃用配置已经被上述的新配置代替,而且最终会被弃用。请优先考虑使用新配置。
+
+
+  
+Option
+Required
+Forwarded
+Default
+Type
+Description
+  
+
+
+
+  lookup.cache.max-rows
+  optional
+  yes
+  (none)
+  Integer
+  请配置 "lookup.cache" = "PARTIAL" 并使用 
"lookup.partial-cache.max-rows" 作为替代

Review Comment:
   Remove the `作为`? And change `替代` to `代替`.



##
docs/content.zh/docs/connectors/table/jdbc.md:
##
@@ -202,28 +202,44 @@ ON myTopic.key = MyUserTable.id;
   它决定了每个语句是否在事务中自动提交。有些 JDBC 驱动程序,特别是
   https://jdbc.postgresql.org/documentation/head/query.html#query-with-cursor";>Postgres,可能需要将此设置为
 false 以便流化结果。
 
+
+  lookup.cache
+  可选
+  NONE
+  枚举类型可选值: NONE, PARTIAL
+  维表的缓存策略。 目前支持 NONE(不缓存)和 PARTIAL(伴随在外部数据库中查找数据的过程缓存)。

Review Comment:
   The phrase "伴随在外部数据库中查找数据的过程缓存" feels like an awkward translation, doesn't it?
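
   For context, a hedged sketch of the new caching options in use, wrapped in a 
Java `TableEnvironment` call (table name, URL, and schema are invented for 
illustration; the option keys are the ones this documentation patch describes):
   
   ```java
   import org.apache.flink.table.api.EnvironmentSettings;
   import org.apache.flink.table.api.TableEnvironment;

   public class PartialCacheExample {
       public static void main(String[] args) {
           TableEnvironment tEnv =
                   TableEnvironment.create(EnvironmentSettings.inStreamingMode());
           // "lookup.cache" and "lookup.partial-cache.max-rows" are the new
           // options documented here; they replace "lookup.cache.max-rows".
           tEnv.executeSql(
                   "CREATE TABLE MyUserTable (\n"
                           + "  id BIGINT,\n"
                           + "  name STRING\n"
                           + ") WITH (\n"
                           + "  'connector' = 'jdbc',\n"
                           + "  'url' = 'jdbc:mysql://localhost:3306/mydb',\n"
                           + "  'table-name' = 'users',\n"
                           + "  'lookup.cache' = 'PARTIAL',\n"
                           + "  'lookup.partial-cache.max-rows' = '1000'\n"
                           + ")");
       }
   }
   ```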



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] zentol merged pull request #20870: [FLINK-29375][rpc] Move getSelfGateway() into RpcService

2022-09-23 Thread GitBox


zentol merged PR #20870:
URL: https://github.com/apache/flink/pull/20870


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-29375) Move getSelfGateway into RpcService

2022-09-23 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-29375.

Resolution: Fixed

master: 0154de9edb44b565e9b9cba614f42e7adae3a953

> Move getSelfGateway into RpcService
> ---
>
> Key: FLINK-29375
> URL: https://issues.apache.org/jira/browse/FLINK-29375
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / RPC
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> Self gateways are a tricky thing and we should give the RPC implementation 
> control over how they are achieved.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] Tartarus0zm commented on pull request #20653: [FLINK-29020][docs] add document for CTAS feature

2022-09-23 Thread GitBox


Tartarus0zm commented on PR #20653:
URL: https://github.com/apache/flink/pull/20653#issuecomment-1256004961

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-ml] weibozhao commented on a diff in pull request #156: [FLINK-29323] Refine Transformer for VectorAssembler

2022-09-23 Thread GitBox


weibozhao commented on code in PR #156:
URL: https://github.com/apache/flink-ml/pull/156#discussion_r978475159


##
flink-ml-lib/src/main/java/org/apache/flink/ml/feature/vectorassembler/VectorAssemblerParams.java:
##
@@ -21,11 +21,31 @@
 import org.apache.flink.ml.common.param.HasHandleInvalid;
 import org.apache.flink.ml.common.param.HasInputCols;
 import org.apache.flink.ml.common.param.HasOutputCol;
+import org.apache.flink.ml.param.IntArrayParam;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.param.ParamValidators;
+
+import java.util.Arrays;
 
 /**
  * Params of {@link VectorAssembler}.
  *
 * @param <T> The class type of this instance.
  */
 public interface VectorAssemblerParams<T>
-extends HasInputCols<T>, HasOutputCol<T>, HasHandleInvalid<T> {}
+extends HasInputCols<T>, HasOutputCol<T>, HasHandleInvalid<T> {
+Param<Integer[]> SIZES =

Review Comment:
   `Splits` is an array, and `splitsArray` is a `double[][]`. VectorSizeHint has no 
parameter like `splitsArray`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-ml] weibozhao commented on a diff in pull request #156: [FLINK-29323] Refine Transformer for VectorAssembler

2022-09-23 Thread GitBox


weibozhao commented on code in PR #156:
URL: https://github.com/apache/flink-ml/pull/156#discussion_r978475159


##
flink-ml-lib/src/main/java/org/apache/flink/ml/feature/vectorassembler/VectorAssemblerParams.java:
##
@@ -21,11 +21,31 @@
 import org.apache.flink.ml.common.param.HasHandleInvalid;
 import org.apache.flink.ml.common.param.HasInputCols;
 import org.apache.flink.ml.common.param.HasOutputCol;
+import org.apache.flink.ml.param.IntArrayParam;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.param.ParamValidators;
+
+import java.util.Arrays;
 
 /**
  * Params of {@link VectorAssembler}.
  *
 * @param <T> The class type of this instance.
 */
 public interface VectorAssemblerParams<T>
-extends HasInputCols<T>, HasOutputCol<T>, HasHandleInvalid<T> {}
+extends HasInputCols<T>, HasOutputCol<T>, HasHandleInvalid<T> {
+Param<Integer[]> SIZES =

Review Comment:
   Splits is an array, splitsArray is a `double[][]`.  VectorSizeHint has no 
parameter like splitsArray.  
   I think Sizes is much better than `vectorSizeArray` or `elementSizeArray`. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-ml] weibozhao commented on a diff in pull request #156: [FLINK-29323] Refine Transformer for VectorAssembler

2022-09-23 Thread GitBox


weibozhao commented on code in PR #156:
URL: https://github.com/apache/flink-ml/pull/156#discussion_r978481060


##
flink-ml-lib/src/main/java/org/apache/flink/ml/feature/vectorassembler/VectorAssembler.java:
##
@@ -74,38 +74,65 @@ public Table[] transform(Table... inputs) {
 DataStream<Row> output =
 tEnv.toDataStream(inputs[0])
 .flatMap(
-new AssemblerFunc(getInputCols(), 
getHandleInvalid()),
+new AssemblerFunction(
+getInputCols(), getHandleInvalid(), 
getSizes()),
 outputTypeInfo);
 Table outputTable = tEnv.fromDataStream(output);
 return new Table[] {outputTable};
 }
 
-private static class AssemblerFunc implements FlatMapFunction<Row, Row> {
+private static class AssemblerFunction implements FlatMapFunction<Row, Row> {
 private final String[] inputCols;
 private final String handleInvalid;
+private final int[] sizeArray;
 
-public AssemblerFunc(String[] inputCols, String handleInvalid) {
+public AssemblerFunction(String[] inputCols, String handleInvalid, 
int[] sizeArray) {
 this.inputCols = inputCols;
 this.handleInvalid = handleInvalid;
+this.sizeArray = sizeArray;
 }
 
 @Override
 public void flatMap(Row value, Collector<Row> out) {
 int nnz = 0;
 int vectorSize = 0;
 try {
-for (String inputCol : inputCols) {
+for (int i = 0; i < inputCols.length; ++i) {
+String inputCol = inputCols[i];
 Object object = value.getField(inputCol);
 Preconditions.checkNotNull(object, "Input column value 
should not be null.");
 if (object instanceof Number) {
+Preconditions.checkArgument(

Review Comment:
   I don't know which record has the wrong size, so I must check the sizes for 
every record. 
   Even when the sizes are wrong, the code can still run, which is why we need 
to check every record.
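
   A minimal sketch of the per-record check being discussed (it assumes the
`sizeArray` param from the diff above; the helper name and message text are
illustrative):

   ```java
   import org.apache.flink.ml.linalg.Vector;
   import org.apache.flink.types.Row;
   import org.apache.flink.util.Preconditions;

   class SizeCheck {
       // Validates every record's vector columns against the declared sizes.
       // The check runs per record because a wrong size only surfaces on the
       // record that actually carries it.
       static void checkSizes(Row value, String[] inputCols, int[] sizeArray) {
           for (int i = 0; i < inputCols.length; ++i) {
               Object object = value.getField(inputCols[i]);
               if (object instanceof Vector) {
                   Vector vector = (Vector) object;
                   Preconditions.checkArgument(
                           vector.size() == sizeArray[i],
                           "Vector in column %s has size %s, but size %s was declared.",
                           inputCols[i], vector.size(), sizeArray[i]);
               }
           }
       }
   }
   ```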



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] snuyanzin commented on a diff in pull request #20850: [FLINK-20873][Table SQl/API] Update to calcite 1.27

2022-09-23 Thread GitBox


snuyanzin commented on code in PR #20850:
URL: https://github.com/apache/flink/pull/20850#discussion_r978511255


##
flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/FlinkSqlParserImplTest.java:
##
@@ -1950,6 +1950,38 @@ void testCreateTableAsSelectWithPartitionKey() {
 "CREATE TABLE AS SELECT syntax does 
not support to create partitioned table yet."));
 }
 
+/**
+ * Here we override the super method to avoid test error from `ARRAY_AGG` 
supported in original
+ * calcite.
+ */
+@Disabled
+@Test
+void testArrayAgg() {}
+
+/**
+ * Here we override the super method to avoid test error from `GROUP 
CONCAT` supported in
+ * original calcite.
+ */
+@Disabled
+@Test
+void testGroupConcat() {}
+
+/**
+ * Here we override the super method to avoid test error from `STRING_AGG` 
supported in original
+ * calcite.
+ */
+@Disabled
+@Test
+void testStringAgg() {}
+
+/**
+ * Here we override the super method to avoid test error from `EXPLAIN AS 
DOT` supported in
+ * original calcite.
+ */
+@Disabled
+@Test
+void testExplainAsDot() {}

Review Comment:
   https://issues.apache.org/jira/browse/CALCITE-4260



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] snuyanzin commented on a diff in pull request #20850: [FLINK-20873][Table SQl/API] Update to calcite 1.27

2022-09-23 Thread GitBox


snuyanzin commented on code in PR #20850:
URL: https://github.com/apache/flink/pull/20850#discussion_r978511413


##
flink-table/flink-sql-parser-hive/src/test/java/org/apache/flink/sql/parser/hive/FlinkHiveSqlParserImplTest.java:
##
@@ -560,4 +560,36 @@ void testExplainUpsert() {
 String expected = "EXPLAIN UPSERT INTO `EMPS1`\n" + "VALUES (ROW(1, 
2))";
 this.sql(sql).ok(expected);
 }
+
+/**
+ * Here we override the super method to avoid test error from `ARRAY_AGG` 
supported in original
+ * calcite.
+ */
+@Disabled
+@Test
+void testArrayAgg() {}
+
+/**
+ * Here we override the super method to avoid test error from `GROUP 
CONCAT` supported in
+ * original calcite.
+ */
+@Disabled
+@Test
+void testGroupConcat() {}
+
+/**
+ * Here we override the super method to avoid test error from `STRING_AGG` 
supported in original
+ * calcite.
+ */
+@Disabled
+@Test
+void testStringAgg() {}
+
+/**
+ * Here we override the super method to avoid test error from `EXPLAIN AS 
DOT` supported in
+ * original calcite.
+ */
+@Disabled
+@Test
+void testExplainAsDot() {}
 }

Review Comment:
   https://issues.apache.org/jira/browse/CALCITE-4260



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] snuyanzin commented on a diff in pull request #20850: [FLINK-20873][Table SQl/API] Update to calcite 1.27

2022-09-23 Thread GitBox


snuyanzin commented on code in PR #20850:
URL: https://github.com/apache/flink/pull/20850#discussion_r978511413


##
flink-table/flink-sql-parser-hive/src/test/java/org/apache/flink/sql/parser/hive/FlinkHiveSqlParserImplTest.java:
##
@@ -560,4 +560,36 @@ void testExplainUpsert() {
 String expected = "EXPLAIN UPSERT INTO `EMPS1`\n" + "VALUES (ROW(1, 
2))";
 this.sql(sql).ok(expected);
 }
+
+/**
+ * Here we override the super method to avoid test error from `ARRAY_AGG` 
supported in original
+ * calcite.
+ */
+@Disabled
+@Test
+void testArrayAgg() {}
+
+/**
+ * Here we override the super method to avoid test error from `GROUP 
CONCAT` supported in
+ * original calcite.
+ */
+@Disabled
+@Test
+void testGroupConcat() {}
+
+/**
+ * Here we override the super method to avoid test error from `STRING_AGG` 
supported in original
+ * calcite.
+ */
+@Disabled
+@Test
+void testStringAgg() {}
+
+/**
+ * Here we override the super method to avoid test error from `EXPLAIN AS 
DOT` supported in
+ * original calcite.
+ */
+@Disabled
+@Test
+void testExplainAsDot() {}
 }

Review Comment:
   Explain as dot was introduced in Calcite at 
https://issues.apache.org/jira/browse/CALCITE-4260



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] snuyanzin commented on a diff in pull request #20850: [FLINK-20873][Table SQl/API] Update to calcite 1.27

2022-09-23 Thread GitBox


snuyanzin commented on code in PR #20850:
URL: https://github.com/apache/flink/pull/20850#discussion_r978511255


##
flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/FlinkSqlParserImplTest.java:
##
@@ -1950,6 +1950,38 @@ void testCreateTableAsSelectWithPartitionKey() {
 "CREATE TABLE AS SELECT syntax does 
not support to create partitioned table yet."));
 }
 
+/**
+ * Here we override the super method to avoid test error from `ARRAY_AGG` 
supported in original
+ * calcite.
+ */
+@Disabled
+@Test
+void testArrayAgg() {}
+
+/**
+ * Here we override the super method to avoid test error from `GROUP 
CONCAT` supported in
+ * original calcite.
+ */
+@Disabled
+@Test
+void testGroupConcat() {}
+
+/**
+ * Here we override the super method to avoid test error from `STRING_AGG` 
supported in original
+ * calcite.
+ */
+@Disabled
+@Test
+void testStringAgg() {}
+
+/**
+ * Here we override the super method to avoid test error from `EXPLAIN AS 
DOT` supported in
+ * original calcite.
+ */
+@Disabled
+@Test
+void testExplainAsDot() {}

Review Comment:
   Explain as dot was introduced in Calcite at 
https://issues.apache.org/jira/browse/CALCITE-4260



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] gyfora commented on pull request #379: [FLINK-29388] Fix to use JobSpec arguments in Standalone Application Mode

2022-09-23 Thread GitBox


gyfora commented on PR #379:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/379#issuecomment-1256070218

   Please rebase before we can merge this


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-29398) Utilize Rack Awareness in Flink Consumer

2022-09-23 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608699#comment-17608699
 ] 

Qingsheng Ren commented on FLINK-29398:
---

Thanks for starting the discussion [~jeremy.degroot]! This is a very 
interesting and useful feature, as you described. 

Under the design of the FLIP-27 Source API we do expose the host names of source 
readers when they register with the split enumerator, so it's possible for the 
split enumerator to make assignments according to the mapping between racks and 
reader hostnames. The file source has already implemented this feature (see 
{{LocalityAwareSplitAssigner}}). Currently the split assignment strategy of the 
Kafka source is hard-coded, so in order to achieve this in KafkaSource we need 
to design a new API that lets users provide a pluggable split assigner for the 
split enumerator in the Kafka source. 

Moreover, a fully optimized solution would be for the Flink scheduler to also 
schedule tasks based on locality, but that is beyond the scope of this ticket. 

[~jeremy.degroot] I'm not sure whether your solution is based on the new 
KafkaSource instead of the deprecated FlinkKafkaConsumer (we won't add new 
features to the deprecated one). Would you like to lead the design of this 
feature? I think a new FLIP is expected, as we are introducing a new feature to 
the Kafka source. 
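
A rough sketch of what such a pluggable assigner could look like (all names
below are illustrative, not an existing Flink API; the FLIP would define the
real interface):

{code:java}
import org.apache.flink.api.connector.source.ReaderInfo;
import org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit;

import java.io.Serializable;
import java.util.Map;

// Illustrative only: picks the owning subtask for each split, e.g. by matching
// the partition's rack against the locations of the registered readers.
public interface KafkaSplitAssigner extends Serializable {

    int assignSplit(
            KafkaPartitionSplit split,
            Map<Integer, ReaderInfo> registeredReaders,
            int parallelism);
}
{code}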

> Utilize Rack Awareness in Flink Consumer
> 
>
> Key: FLINK-29398
> URL: https://issues.apache.org/jira/browse/FLINK-29398
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Jeremy DeGroot
>Priority: Major
>
> [KIP-708|https://cwiki.apache.org/confluence/display/KAFKA/KIP-708%3A+Rack+awareness+for+Kafka+Streams]
>  was implemented some time ago in Kafka. This allows brokers and consumers to 
> communicate about the rack (or AWS Availability Zone) they're located in. 
> Reading from a local broker can save money in bandwidth and improve latency 
> for your consumers.
> Flink Kafka consumers currently cannot easily use rack awareness if they're 
> deployed across multiple racks or availability zones, because they have no 
> control over which rack the Task Manager they'll be assigned to may be in. 
> This improvement proposes that a Kafka Consumer could be configured with a 
> callback or Future that could be run when it's being configured on the task 
> manager, that will set the appropriate value at runtime if a value is 
> provided. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-kubernetes-operator] avocadomaster commented on pull request #379: [FLINK-29388] Fix to use JobSpec arguments in Standalone Application Mode

2022-09-23 Thread GitBox


avocadomaster commented on PR #379:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/379#issuecomment-1256084073

   > Please rebase before we can merge this
   
   Rebased ✅ 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-table-store] SteNicholas commented on a diff in pull request #300: [FLINK-29276] BinaryInMemorySortBuffer supports to clear all memory segments

2022-09-23 Thread GitBox


SteNicholas commented on code in PR #300:
URL: https://github.com/apache/flink-table-store/pull/300#discussion_r978544844


##
flink-table-store-core/src/main/java/org/apache/flink/table/store/file/memory/sort/BinaryInMemorySortBuffer.java:
##
@@ -0,0 +1,237 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.file.memory.sort;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.runtime.io.disk.SimpleCollectingOutputView;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.binary.BinaryRowData;
+import org.apache.flink.table.runtime.generated.NormalizedKeyComputer;
+import org.apache.flink.table.runtime.generated.RecordComparator;
+import org.apache.flink.table.runtime.operators.sort.BinaryIndexedSortable;
+import org.apache.flink.table.runtime.typeutils.AbstractRowDataSerializer;
+import org.apache.flink.table.runtime.typeutils.BinaryRowDataSerializer;
+import org.apache.flink.table.runtime.util.MemorySegmentPool;
+import org.apache.flink.util.MutableObjectIterator;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.util.ArrayList;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/**
+ * In memory sort buffer for binary row. The main code is copied from Flink 
{@code
+ * BinaryInMemorySortBuffer} instead of extended because it's a final class.
+ *
+ * The main differences in the new sort buffer are: a) Add clear method to 
clean all memory; b)

Review Comment:
   It's better to list each difference on a new line, as follows:
   The main differences in the new sort buffer are:
   1. xxx
   2. xxx
   3. xxx



##
flink-table-store-core/src/main/java/org/apache/flink/table/store/file/memory/sort/BinaryInMemorySortBuffer.java:
##
@@ -0,0 +1,237 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.file.memory.sort;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.runtime.io.disk.SimpleCollectingOutputView;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.binary.BinaryRowData;
+import org.apache.flink.table.runtime.generated.NormalizedKeyComputer;
+import org.apache.flink.table.runtime.generated.RecordComparator;
+import org.apache.flink.table.runtime.operators.sort.BinaryIndexedSortable;
+import org.apache.flink.table.runtime.typeutils.AbstractRowDataSerializer;
+import org.apache.flink.table.runtime.typeutils.BinaryRowDataSerializer;
+import org.apache.flink.table.runtime.util.MemorySegmentPool;
+import org.apache.flink.util.MutableObjectIterator;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.util.ArrayList;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/**
+ * In memory sort buffer for binary row. The main code is copied from Flink 
{@code
+ * BinaryInMemorySortBuffer} instead of extended because it's a final class.
+ *
+ * The main differences in the new sort buffer are: a) Add clear method to 
clean all memory; b)
+ * Add tryInitialized() method to initialize memory before write and read in 
buffer, while the old
+ * buffer will do it in the constructor and reset(); c) Remove reset() and 
ect. methods which are

Review Comment:
   ```suggestion
* buffer will do it in the constructor and reset(); c) Remove reset() and 
etc. methods which are
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[GitHub] [flink-table-store] SteNicholas commented on a diff in pull request #300: [FLINK-29276] BinaryInMemorySortBuffer supports to clear all memory segments

2022-09-23 Thread GitBox


SteNicholas commented on code in PR #300:
URL: https://github.com/apache/flink-table-store/pull/300#discussion_r978544844


##
flink-table-store-core/src/main/java/org/apache/flink/table/store/file/memory/sort/BinaryInMemorySortBuffer.java:
##
@@ -0,0 +1,237 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.file.memory.sort;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.runtime.io.disk.SimpleCollectingOutputView;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.binary.BinaryRowData;
+import org.apache.flink.table.runtime.generated.NormalizedKeyComputer;
+import org.apache.flink.table.runtime.generated.RecordComparator;
+import org.apache.flink.table.runtime.operators.sort.BinaryIndexedSortable;
+import org.apache.flink.table.runtime.typeutils.AbstractRowDataSerializer;
+import org.apache.flink.table.runtime.typeutils.BinaryRowDataSerializer;
+import org.apache.flink.table.runtime.util.MemorySegmentPool;
+import org.apache.flink.util.MutableObjectIterator;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.util.ArrayList;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/**
+ * In memory sort buffer for binary row. The main code is copied from Flink 
{@code
+ * BinaryInMemorySortBuffer} instead of extended because it's a final class.
+ *
+ * The main differences in the new sort buffer are: a) Add clear method to 
clean all memory; b)

Review Comment:
   It's better to list each difference on a new line, as follows:
   ```
   The main differences in the new sort buffer are:
   1. xxx
   2. xxx
   3. xxx
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] snuyanzin commented on a diff in pull request #20850: [FLINK-20873][Table SQl/API] Update to calcite 1.27

2022-09-23 Thread GitBox


snuyanzin commented on code in PR #20850:
URL: https://github.com/apache/flink/pull/20850#discussion_r978583021


##
flink-table/flink-table-planner/src/test/resources/org/apache/flink/table/planner/plan/batch/sql/CalcTest.xml:
##
@@ -279,7 +279,7 @@ LogicalProject(a=[$0], b=[$1], c=[$2])

[GitHub] [flink] snuyanzin commented on a diff in pull request #20850: [FLINK-20873][Table SQl/API] Update to calcite 1.27

2022-09-23 Thread GitBox


snuyanzin commented on code in PR #20850:
URL: https://github.com/apache/flink/pull/20850#discussion_r978583509


##
flink-table/flink-table-planner/src/test/resources/org/apache/flink/table/planner/plan/batch/sql/CalcTest.xml:
##
@@ -361,13 +361,13 @@ Calc(select=[a, (b + 1) AS EXPR$1], where=[(b > 2)])

[GitHub] [flink] snuyanzin commented on a diff in pull request #20850: [FLINK-20873][Table SQl/API] Update to calcite 1.27

2022-09-23 Thread GitBox


snuyanzin commented on code in PR #20850:
URL: https://github.com/apache/flink/pull/20850#discussion_r978587112


##
flink-table/flink-table-planner/src/test/resources/org/apache/flink/table/planner/plan/stream/sql/CalcTest.xml:
##
@@ -262,7 +262,7 @@ LogicalProject(a=[$0], b=[$1], c=[$2])

[GitHub] [flink] snuyanzin commented on a diff in pull request #20850: [FLINK-20873][Table SQl/API] Update to calcite 1.27

2022-09-23 Thread GitBox


snuyanzin commented on code in PR #20850:
URL: https://github.com/apache/flink/pull/20850#discussion_r978587582


##
flink-table/flink-table-planner/src/test/resources/org/apache/flink/table/planner/plan/stream/sql/CalcTest.xml:
##
@@ -344,13 +344,13 @@ Calc(select=[a, (b + 1) AS EXPR$1], where=[(b > 2)])

[GitHub] [flink] snuyanzin commented on a diff in pull request #20850: [FLINK-20873][Table SQl/API] Update to calcite 1.27

2022-09-23 Thread GitBox


snuyanzin commented on code in PR #20850:
URL: https://github.com/apache/flink/pull/20850#discussion_r978592501


##
flink-table/flink-table-planner/src/test/resources/org/apache/flink/table/planner/plan/stream/table/JoinTest.xml:
##
@@ -84,15 +84,15 @@ Calc(select=[a, b, d, e])

[GitHub] [flink] snuyanzin commented on a diff in pull request #20850: [FLINK-20873][Table SQl/API] Update to calcite 1.27

2022-09-23 Thread GitBox


snuyanzin commented on code in PR #20850:
URL: https://github.com/apache/flink/pull/20850#discussion_r978592824


##
flink-table/flink-table-planner/src/test/resources/org/apache/flink/table/planner/plan/stream/table/JoinTest.xml:
##
@@ -84,15 +84,15 @@ Calc(select=[a, b, d, e])

[jira] [Created] (FLINK-29401) Improve observer structure

2022-09-23 Thread Gyula Fora (Jira)
Gyula Fora created FLINK-29401:
--

 Summary: Improve observer structure
 Key: FLINK-29401
 URL: https://issues.apache.org/jira/browse/FLINK-29401
 Project: Flink
  Issue Type: Improvement
  Components: Kubernetes Operator
Reporter: Gyula Fora
Assignee: Gyula Fora
 Fix For: kubernetes-operator-1.3.0


The AbstractDeploymentObserver and SessionJobObserver at this point share a lot 
of common logic due to the unification of other parts.

We should factor out the common parts into an abstract base observer class.

Furthermore, we should move the logic of the SavepointObserver into the 
JobStatusObserver, where it logically belongs.
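
A rough sketch of the intended structure (names below are placeholders, not the
final operator classes):

{code:java}
// Illustrative sketch of the refactoring direction.
abstract class AbstractResourceObserver<CR> {

    /** Shared observe flow reused by deployment and session job observers. */
    public final void observe(CR resource) {
        observeJobStatus(resource);
        observeSpecifics(resource);
    }

    /** Job status (and savepoint) observation moves here / into the JobStatusObserver. */
    protected void observeJobStatus(CR resource) {}

    /** Only the resource-specific parts stay in the subclasses. */
    protected abstract void observeSpecifics(CR resource);
}
{code}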



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29315) HDFSTest#testBlobServerRecovery fails on CI

2022-09-23 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608729#comment-17608729
 ] 

Chesnay Schepler commented on FLINK-29315:
--

Experiment results:
- test fails when using a TempDir
- test fails when each test case uses a different tmp directory
- test succeeds when only running #testBlobServerRecovery

Will update this as more experiments conclude.

> HDFSTest#testBlobServerRecovery fails on CI
> ---
>
> Key: FLINK-29315
> URL: https://issues.apache.org/jira/browse/FLINK-29315
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.16.0, 1.15.2
>Reporter: Chesnay Schepler
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0
>
>
> The test started failing 2 days ago on different branches. I suspect 
> something's wrong with the CI infrastructure.
> {code:java}
> Sep 15 09:11:22 [ERROR] Failures: 
> Sep 15 09:11:22 [ERROR]   HDFSTest.testBlobServerRecovery Multiple Failures 
> (2 failures)
> Sep 15 09:11:22   java.lang.AssertionError: Test failed Error while 
> running command to get file permissions : java.io.IOException: Cannot run 
> program "ls": error=1, Operation not permitted
> Sep 15 09:11:22   at 
> java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.runCommand(Shell.java:913)
> Sep 15 09:11:22   at org.apache.hadoop.util.Shell.run(Shell.java:869)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.execCommand(Shell.java:1264)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.execCommand(Shell.java:1246)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1089)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:697)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:672)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2580)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2622)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2604)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2497)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1501)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:851)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:485)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:444)
> Sep 15 09:11:22   at 
> org.apache.flink.hdfstests.HDFSTest.createHDFS(HDFSTest.java:93)
> Sep 15 09:11:22   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Sep 15 09:11:22   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Sep 15 09:11:22   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Sep 15 09:11:22   at 
> java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
> Sep 15 09:11:22   ... 67 more
> Sep 15 09:11:22 
> Sep 15 09:11:22   java.lang.NullPointerException: 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] zentol commented on pull request #20867: [FLINK-29372] Add suffix to all options that conflict with YAML

2022-09-23 Thread GitBox


zentol commented on PR #20867:
URL: https://github.com/apache/flink/pull/20867#issuecomment-1256170114

   Added a test, which found a few more violations.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] zentol commented on a diff in pull request #20867: [FLINK-29372] Add suffix to all options that conflict with YAML

2022-09-23 Thread GitBox


zentol commented on code in PR #20867:
URL: https://github.com/apache/flink/pull/20867#discussion_r978603305


##
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/configuration/KubernetesConfigOptions.java:
##
@@ -269,9 +269,10 @@ public class KubernetesConfigOptions {
 @Documentation.OverrideDefault(
 "The default value depends on the actually running version. In 
general it looks like \"flink:<FLINK_VERSION>-scala_<SCALA_VERSION>\"")
 public static final ConfigOption<String> CONTAINER_IMAGE =
-key("kubernetes.container.image")
+key("kubernetes.container.image.ref")

Review Comment:
   There does not appear to be a proper term for this, so I used "ref" similar 
to what the log4j uses.
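   
   For context, the underlying clash (sketched below with `pull-policy`, an
   existing sibling key) is that in nested YAML one path cannot hold both a
   scalar value and child keys:
   
   ```yaml
   # Expanding the flat config keys into a YAML tree:
   kubernetes:
     container:
       image: flink:1.16          # scalar value under "image" ...
       image:                     # ... clashes with nested keys under the same path
         pull-policy: IfNotPresent
   # Renaming the scalar to kubernetes.container.image.ref removes the clash.
   ```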



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] afedulov commented on a diff in pull request #20876: [hotfix][docs] Fix typos in Kinesis Connector docs.

2022-09-23 Thread GitBox


afedulov commented on code in PR #20876:
URL: https://github.com/apache/flink/pull/20876#discussion_r978609413


##
docs/content.zh/docs/connectors/table/kinesis.md:
##
@@ -776,7 +776,7 @@ You can enable and configure EFO with the following 
properties:
 This is the preferred strategy for the majority of applications.
 However, jobs with parallelism greater than 1 will result in tasks 
competing to register and acquire the stream consumer ARN.
 For jobs with very large parallelism this can result in an increased 
start-up time.
-The describe operation has a limit of 20 [transactions per 
second](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_DescribeStreamConsumer.html),

Review Comment:
   Ah, ok, that makes sense, updated accordingly.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-29315) HDFSTest#testBlobServerRecovery fails on CI

2022-09-23 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608729#comment-17608729
 ] 

Chesnay Schepler edited comment on FLINK-29315 at 9/23/22 1:09 PM:
---

Experiment results:
- test fails when using a TempDir
- test fails when each test case uses a different tmp directory
- test fails when only running #testHDFS, #testChangingFileNames and 
#testBlobServerRecovery
- test succeeds when only running #testBlobServerRecovery

Will update this as more experiments conclude.


was (Author: zentol):
Experiment results:
- test fails when using a TempDir
- test fails when each test case uses a different tmp directory
- test succeeds when only running #testBlobServerRecovery

Will update this as more experiments conclude.

> HDFSTest#testBlobServerRecovery fails on CI
> ---
>
> Key: FLINK-29315
> URL: https://issues.apache.org/jira/browse/FLINK-29315
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.16.0, 1.15.2
>Reporter: Chesnay Schepler
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0
>
>
> The test started failing 2 days ago on different branches. I suspect 
> something's wrong with the CI infrastructure.
> {code:java}
> Sep 15 09:11:22 [ERROR] Failures: 
> Sep 15 09:11:22 [ERROR]   HDFSTest.testBlobServerRecovery Multiple Failures 
> (2 failures)
> Sep 15 09:11:22   java.lang.AssertionError: Test failed Error while 
> running command to get file permissions : java.io.IOException: Cannot run 
> program "ls": error=1, Operation not permitted
> Sep 15 09:11:22   at 
> java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.runCommand(Shell.java:913)
> Sep 15 09:11:22   at org.apache.hadoop.util.Shell.run(Shell.java:869)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.execCommand(Shell.java:1264)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.execCommand(Shell.java:1246)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1089)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:697)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:672)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2580)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2622)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2604)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2497)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1501)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:851)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:485)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:444)
> Sep 15 09:11:22   at 
> org.apache.flink.hdfstests.HDFSTest.createHDFS(HDFSTest.java:93)
> Sep 15 09:11:22   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Sep 15 09:11:22   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Sep 15 09:11:22   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Sep 15 09:11:22   at 
> java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
> Sep 15 09:11:22   ... 67 more
> Sep 15 09:11:22 
> Sep 15 09:11:22   java.lang.NullPointerException: 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-29315) HDFSTest#testBlobServerRecovery fails on CI

2022-09-23 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608729#comment-17608729
 ] 

Chesnay Schepler edited comment on FLINK-29315 at 9/23/22 1:13 PM:
---

Experiment results:
- #testBlobServerRecovery fails when using a TempDir
- #testBlobServerRecovery fails with each test case using a different tmp 
directory
- #testHDFS fails when only running #testHDFS, #testChangingFileNames and 
#testBlobServerRecovery
- test succeeds when only running #testBlobServerRecovery

Will update this as more experiments conclude.


was (Author: zentol):
Experiment results:
- test fails when using a TempDir
- test fails when each test case uses a different tmp directory
- test fails when only running #testHDFS, #testChangingFileNames and 
#testBlobServerRecovery
- test succeeds when only running #testBlobServerRecovery

Will update this as more experiments conclude.

> HDFSTest#testBlobServerRecovery fails on CI
> ---
>
> Key: FLINK-29315
> URL: https://issues.apache.org/jira/browse/FLINK-29315
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.16.0, 1.15.2
>Reporter: Chesnay Schepler
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0
>
>
> The test started failing 2 days ago on different branches. I suspect 
> something's wrong with the CI infrastructure.
> {code:java}
> Sep 15 09:11:22 [ERROR] Failures: 
> Sep 15 09:11:22 [ERROR]   HDFSTest.testBlobServerRecovery Multiple Failures 
> (2 failures)
> Sep 15 09:11:22   java.lang.AssertionError: Test failed Error while 
> running command to get file permissions : java.io.IOException: Cannot run 
> program "ls": error=1, Operation not permitted
> Sep 15 09:11:22   at 
> java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.runCommand(Shell.java:913)
> Sep 15 09:11:22   at org.apache.hadoop.util.Shell.run(Shell.java:869)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.execCommand(Shell.java:1264)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.execCommand(Shell.java:1246)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1089)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:697)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:672)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2580)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2622)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2604)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2497)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1501)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:851)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:485)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:444)
> Sep 15 09:11:22   at 
> org.apache.flink.hdfstests.HDFSTest.createHDFS(HDFSTest.java:93)
> Sep 15 09:11:22   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Sep 15 09:11:22   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Sep 15 09:11:22   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Sep 15 09:11:22   at 
> java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
> Sep 15 09:11:22   ... 67 more
> Sep 15 09:11:22 
> Sep 15 09:11:22   java.lang.NullPointerException: 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] twalthr closed pull request #20840: [FLINK-29309][streaming-java] Relax allow-client-job-configurations for Table API and parameters

2022-09-23 Thread GitBox


twalthr closed pull request #20840: [FLINK-29309][streaming-java] Relax 
allow-client-job-configurations for Table API and parameters
URL: https://github.com/apache/flink/pull/20840


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-6573) Flink MongoDB Connector

2022-09-23 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-6573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608738#comment-17608738
 ] 

Jiabao Sun commented on FLINK-6573:
---

Hi there,

The discussion thread was opened on 
[https://lists.apache.org/thread/bhzj70t9g6ofdk8hqtfjjlxqnl0l4xwn].

Looking forward to any comments or feedback. ;)

> Flink MongoDB Connector
> ---
>
> Key: FLINK-6573
> URL: https://issues.apache.org/jira/browse/FLINK-6573
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Affects Versions: 1.2.0
> Environment: Linux Operating System, Mongo DB
>Reporter: Nagamallikarjuna
>Assignee: Jiabao Sun
>Priority: Not a Priority
>  Labels: pull-request-available, stale-assigned
> Attachments: image-2021-11-15-14-41-07-514.png
>
>   Original Estimate: 672h
>  Remaining Estimate: 672h
>
> Hi Community,
> We are currently using Flink in our project. We have a huge amount of 
> data to process with Flink, which resides in MongoDB. We have a requirement 
> for parallel data connectivity between Flink and MongoDB for both 
> reads/writes. We are planning to create this connector and 
> contribute it to the community.
> I will update the further details once I receive your feedback.
> Please let us know if you have any concerns.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (FLINK-29254) RpcGateway should have some form of Close() method

2022-09-23 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reopened FLINK-29254:
--

Turns out this isn't actually that difficult.
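
One possible shape, as a minimal sketch (it assumes a default no-op so existing
gateways keep working; not necessarily the final design):

{code:java}
import java.util.concurrent.CompletableFuture;

// Sketch: extend the gateway contract with an explicit close, defaulting to a
// no-op for implementations that hold no client-side resources.
public interface RpcGateway {
    String getAddress();
    String getHostname();

    default CompletableFuture<Void> closeAsync() {
        return CompletableFuture.completedFuture(null);
    }
}
{code}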

> RpcGateway should have some form of Close() method
> --
>
> Key: FLINK-29254
> URL: https://issues.apache.org/jira/browse/FLINK-29254
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / RPC
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>
> In a simple client-server model the gateway constitutes the client, which 
> will generally have to allocate some resources to communicate with the server.
> There is however currently no way for a user to close a gateway, hence these 
> resources will generally leak (unless the underlying RPC implementation 
> either magically fixes that somehow or doesn't allocate resources for clients 
> in the first place).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29254) RpcGateway should have some form of Close() method

2022-09-23 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-29254:
-
Parent: FLINK-29281
Issue Type: Sub-task  (was: Technical Debt)

> RpcGateway should have some form of Close() method
> --
>
> Key: FLINK-29254
> URL: https://issues.apache.org/jira/browse/FLINK-29254
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / RPC
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>
> In a simple client-server model the gateway constitutes the client, which 
> will generally have to allocate some resources to communicate with the server.
> There is however currently no way for a user to close a gateway, hence these 
> resources will generally leak (unless the underlying RPC implementation 
> either magically fixes that somehow or doesn't allocate resources for clients 
> in the first place).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-29230) Publish blogpost about the Akka license change

2022-09-23 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-29230.

Resolution: Fixed

> Publish blogpost about the Akka license change
> --
>
> Key: FLINK-29230
> URL: https://issues.apache.org/jira/browse/FLINK-29230
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Project Website
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
>
> People reached out to the project via every conceivable channel to inquire 
> about the impact of the recent Akka change; let's write a blogpost to 
> centralize answers to that.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-27893) Add additionalPrinterColumns for Job Status and Reconciliation Status in CRDs

2022-09-23 Thread Jeesmon Jacob (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608750#comment-17608750
 ] 

Jeesmon Jacob commented on FLINK-27893:
---

[~gyfora] Can we mark this as closed based on FLINK-29383? Thanks.

> Add additionalPrinterColumns for Job Status and Reconciliation Status in CRDs
> -
>
> Key: FLINK-27893
> URL: https://issues.apache.org/jira/browse/FLINK-27893
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Jeesmon Jacob
>Assignee: Jeesmon Jacob
>Priority: Major
>
> Right now the only columns available when you run "kubectl get flinkdeployment" 
> are NAME and AGE. It would be beneficial to see .status.jobStatus.State and 
> .status.ReconciliationStatus.State as additionalPrinterColumns in the CRD. It 
> would also help automation use those columns for status reporting.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-27893) Add additionalPrinterColumns for Job Status and Reconciliation Status in CRDs

2022-09-23 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608751#comment-17608751
 ] 

Gyula Fora commented on FLINK-27893:


Ah, I did not realize that this was a duplicate. Closing.

> Add additionalPrinterColumns for Job Status and Reconciliation Status in CRDs
> -
>
> Key: FLINK-27893
> URL: https://issues.apache.org/jira/browse/FLINK-27893
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Jeesmon Jacob
>Assignee: Jeesmon Jacob
>Priority: Major
>
> Right now the only columns available when you run "kubectl get flinkdeployment" 
> are NAME and AGE. It would be beneficial to see .status.jobStatus.State and 
> .status.ReconciliationStatus.State as additionalPrinterColumns in the CRD. It 
> would also help automation use those columns for status reporting.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-27893) Add additionalPrinterColumns for Job Status and Reconciliation Status in CRDs

2022-09-23 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-27893.
--
Resolution: Duplicate

duplicate of FLINK-29383

> Add additionalPrinterColumns for Job Status and Reconciliation Status in CRDs
> -
>
> Key: FLINK-27893
> URL: https://issues.apache.org/jira/browse/FLINK-27893
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Jeesmon Jacob
>Assignee: Jeesmon Jacob
>Priority: Major
>
> Right now the only columns available when you run "kubectl get flinkdeployment" 
> are NAME and AGE. It would be beneficial to see .status.jobStatus.State and 
> .status.ReconciliationStatus.State as additionalPrinterColumns in the CRD. It 
> would also help automation use those columns for status reporting.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29402) Add USE_DIRECT_READ configuration parameter for RocksDB

2022-09-23 Thread Donatien (Jira)
Donatien created FLINK-29402:


 Summary: Add USE_DIRECT_READ configuration parameter for RocksDB
 Key: FLINK-29402
 URL: https://issues.apache.org/jira/browse/FLINK-29402
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / State Backends
Affects Versions: 1.15.2
Reporter: Donatien
 Fix For: 1.15.2


RocksDB allows the use of DirectIO for read operations to bypass the Linux Page 
Cache. To understand the impact of the Linux Page Cache on performance, one can 
run a heavy workload on a single-tasked Task Manager with a container memory 
limit identical to the TM process memory. Running the same workload on a TM with 
no container memory limit will result in better performance, but with the host 
memory exceeding the TM requirement.

The Linux Page Cache is of course useful, but it can give misleading results 
when benchmarking the managed memory used by RocksDB. DirectIO is typically 
enabled for benchmarks on working set estimation [Zwaenepoel et 
al.|https://arxiv.org/abs/1702.04323].

I propose to add a configuration key allowing users to enable the use of 
DirectIO for reads via the RocksDB API. This configuration would be 
disabled by default.
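
For reference, a minimal sketch of what the proposed key would do under the 
hood; this is already expressible today with a custom {{RocksDBOptionsFactory}} 
registered via {{state.backend.rocksdb.options-factory}}:

{code:java}
import org.apache.flink.contrib.streaming.state.RocksDBOptionsFactory;

import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;

import java.util.Collection;

// Sketch: enable DirectIO for reads so RocksDB bypasses the Linux Page Cache.
public class DirectReadsOptionsFactory implements RocksDBOptionsFactory {

    @Override
    public DBOptions createDBOptions(
            DBOptions currentOptions, Collection<AutoCloseable> handlesToClose) {
        return currentOptions.setUseDirectReads(true);
    }

    @Override
    public ColumnFamilyOptions createColumnOptions(
            ColumnFamilyOptions currentOptions, Collection<AutoCloseable> handlesToClose) {
        return currentOptions;
    }
}
{code}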



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-kubernetes-operator] gyfora merged pull request #379: [FLINK-29388] Fix to use JobSpec arguments in Standalone Application Mode

2022-09-23 Thread GitBox


gyfora merged PR #379:
URL: https://github.com/apache/flink-kubernetes-operator/pull/379


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-29388) Fix args in JobSpec not being passed through to Flink in Standalone mode

2022-09-23 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-29388.
--
Resolution: Fixed

merged to main e2b829c7df7501760dec8f9aa47685c680b227cf

> Fix args in JobSpec not being passed through to Flink in Standalone mode
> 
>
> Key: FLINK-29388
> URL: https://issues.apache.org/jira/browse/FLINK-29388
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.2.0
>Reporter: Usamah Jassat
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29392) SessionJobs are lost when Session FlinkDeployment is upgraded without HA

2022-09-23 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora updated FLINK-29392:
---
Fix Version/s: kubernetes-operator-1.2.0

> SessionJobs are lost when Session FlinkDeployment is upgraded without HA
> 
>
> Key: FLINK-29392
> URL: https://issues.apache.org/jira/browse/FLINK-29392
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.1.0, kubernetes-operator-1.2.0
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>Priority: Critical
> Fix For: kubernetes-operator-1.2.0
>
>
> Currently SessionJobs are completely lost if the session FlinkDeployment was 
> upgraded and HA wasn't enabled. This is related to FLINK-27979, but it's a 
> quite critical manifestation of it.
> After that the session job is never restarted and the observer thinks it's 
> "fine" and keeps it in the RECONCILING state for some reason.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-29391) Add option to set labels and annotations in kubernetes deployment

2022-09-23 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-29391.
--
Resolution: Not A Problem

> Add option to set labels and annotations in kubernetes deployment
> -
>
> Key: FLINK-29391
> URL: https://issues.apache.org/jira/browse/FLINK-29391
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Barisa
>Priority: Major
>
> Using 
> [https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/custom-resource/pod-template/],
> it is quite easy to set any configuration, containers, or volumes in the 
> pods.
>  
> However, I have a requirement to be able to set annotations and labels 
> directly on the kubernetes deployments, which manage taskmanager/jobmanager 
> pods.
>  
> For example:
> {noformat}
> kc describe deployment basic-example
> Name:   basic-example
> Namespace:  zonda
> CreationTimestamp:  Thu, 22 Sep 2022 10:54:30 +0100
> Labels: app=basic-example
> component=jobmanager
> type=flink-native-kubernetes
> Annotations:deployment.kubernetes.io/revision: 1
> flinkdeployment.flink.apache.org/generation: 2
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29391) Add option to set labels and annotations in kubernetes deployment

2022-09-23 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608778#comment-17608778
 ] 

Gyula Fora commented on FLINK-29391:


You can use the following built-in Flink configs:

kubernetes.jobmanager.labels
kubernetes.taskmanager.labels
kubernetes.jobmanager.annotations
kubernetes.taskmanager.annotations
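
For example (the values are illustrative; labels and annotations are given as 
comma-separated key:value pairs):

{noformat}
kubernetes.jobmanager.labels: team:data,env:prod
kubernetes.taskmanager.labels: team:data,env:prod
kubernetes.jobmanager.annotations: prometheus.io/scrape:true
{noformat}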

> Add option to set labels and annotations in kubernetes deployment
> -
>
> Key: FLINK-29391
> URL: https://issues.apache.org/jira/browse/FLINK-29391
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Barisa
>Priority: Major
>
> Using 
> [https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/custom-resource/pod-template/],
> it is quite easy to set any configuration, containers, or volumes in the 
> pods.
>  
> However, I have a requirement to be able to set annotations and labels 
> directly on the kubernetes deployments, which manage taskmanager/jobmanager 
> pods.
>  
> For example:
> {noformat}
> kc describe deployment basic-example
> Name:   basic-example
> Namespace:  zonda
> CreationTimestamp:  Thu, 22 Sep 2022 10:54:30 +0100
> Labels: app=basic-example
> component=jobmanager
> type=flink-native-kubernetes
> Annotations:deployment.kubernetes.io/revision: 1
> flinkdeployment.flink.apache.org/generation: 2
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] PatrickRen commented on pull request #20734: [FLINK-29093][table] Fix InternalCompilerException in LookupJoinITCase + reset resource counter before each test

2022-09-23 Thread GitBox


PatrickRen commented on PR #20734:
URL: https://github.com/apache/flink/pull/20734#issuecomment-1256286903

   The CI failed because of an unrelated issue FLINK-29315. Merging to master


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] PatrickRen merged pull request #20734: [FLINK-29093][table] Fix InternalCompilerException in LookupJoinITCase + reset resource counter before each test

2022-09-23 Thread GitBox


PatrickRen merged PR #20734:
URL: https://github.com/apache/flink/pull/20734


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] PatrickRen commented on pull request #20890: [FLINK-29093][table] Fix InternalCompilerException in LookupJoinITCas…

2022-09-23 Thread GitBox


PatrickRen commented on PR #20890:
URL: https://github.com/apache/flink/pull/20890#issuecomment-1256289693

   The CI failed because of an unrelated issue FLINK-29315. Merging to 
release-1.16


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] PatrickRen merged pull request #20890: [FLINK-29093][table] Fix InternalCompilerException in LookupJoinITCas…

2022-09-23 Thread GitBox


PatrickRen merged PR #20890:
URL: https://github.com/apache/flink/pull/20890


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-29093) LookupJoinITCase failed with InternalCompilerException

2022-09-23 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren closed FLINK-29093.
-
Resolution: Fixed

> LookupJoinITCase failed with InternalCompilerException
> --
>
> Key: FLINK-29093
> URL: https://issues.apache.org/jira/browse/FLINK-29093
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0
>Reporter: Xingbo Huang
>Assignee: Alexander Smirnov
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.16.0
>
>
> {code:java}
> 2022-08-24T03:45:02.5915521Z Aug 24 03:45:02 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2022-08-24T03:45:02.5916823Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> 2022-08-24T03:45:02.5919320Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141)
> 2022-08-24T03:45:02.5920833Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2022-08-24T03:45:02.5922361Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2022-08-24T03:45:02.5923733Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-08-24T03:45:02.5924922Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-08-24T03:45:02.5926191Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:268)
> 2022-08-24T03:45:02.5927677Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-08-24T03:45:02.5929091Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-08-24T03:45:02.5930430Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-08-24T03:45:02.5931966Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-08-24T03:45:02.5933293Z Aug 24 03:45:02  at 
> org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1277)
> 2022-08-24T03:45:02.5934708Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
> 2022-08-24T03:45:02.5936228Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
> 2022-08-24T03:45:02.5937998Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
> 2022-08-24T03:45:02.5939627Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-08-24T03:45:02.5941051Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-08-24T03:45:02.5942650Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-08-24T03:45:02.5944203Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-08-24T03:45:02.5945740Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47)
> 2022-08-24T03:45:02.5947020Z Aug 24 03:45:02  at 
> akka.dispatch.OnComplete.internal(Future.scala:300)
> 2022-08-24T03:45:02.5948130Z Aug 24 03:45:02  at 
> akka.dispatch.OnComplete.internal(Future.scala:297)
> 2022-08-24T03:45:02.5949255Z Aug 24 03:45:02  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:224)
> 2022-08-24T03:45:02.5950405Z Aug 24 03:45:02  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:221)
> 2022-08-24T03:45:02.5951638Z Aug 24 03:45:02  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
> 2022-08-24T03:45:02.5953564Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65)
> 2022-08-24T03:45:02.5955214Z Aug 24 03:45:02  at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68)
> 2022-08-24T03:45:02.5956587Z Aug 24 03:45:02  at 
> scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284)
> 2022-08-24T03:45:02.5958037Z Aug 24 03:45:02  at 
> scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:284)
> 2022-08-24T03:45:02.5959448Z Aug 24 03:45:02  at 
>

[jira] [Assigned] (FLINK-29093) LookupJoinITCase failed with InternalCompilerException

2022-09-23 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren reassigned FLINK-29093:
-

Assignee: Alexander Smirnov  (was: Qingsheng Ren)

> LookupJoinITCase failed with InternalCompilerException
> --
>
> Key: FLINK-29093
> URL: https://issues.apache.org/jira/browse/FLINK-29093
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0
>Reporter: Xingbo Huang
>Assignee: Alexander Smirnov
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.16.0
>
>
> {code:java}
> 2022-08-24T03:45:02.5915521Z Aug 24 03:45:02 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2022-08-24T03:45:02.5916823Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> 2022-08-24T03:45:02.5919320Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141)
> 2022-08-24T03:45:02.5920833Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2022-08-24T03:45:02.5922361Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2022-08-24T03:45:02.5923733Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-08-24T03:45:02.5924922Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-08-24T03:45:02.5926191Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:268)
> 2022-08-24T03:45:02.5927677Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-08-24T03:45:02.5929091Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-08-24T03:45:02.5930430Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-08-24T03:45:02.5931966Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-08-24T03:45:02.5933293Z Aug 24 03:45:02  at 
> org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1277)
> 2022-08-24T03:45:02.5934708Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
> 2022-08-24T03:45:02.5936228Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
> 2022-08-24T03:45:02.5937998Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
> 2022-08-24T03:45:02.5939627Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-08-24T03:45:02.5941051Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-08-24T03:45:02.5942650Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-08-24T03:45:02.5944203Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-08-24T03:45:02.5945740Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47)
> 2022-08-24T03:45:02.5947020Z Aug 24 03:45:02  at 
> akka.dispatch.OnComplete.internal(Future.scala:300)
> 2022-08-24T03:45:02.5948130Z Aug 24 03:45:02  at 
> akka.dispatch.OnComplete.internal(Future.scala:297)
> 2022-08-24T03:45:02.5949255Z Aug 24 03:45:02  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:224)
> 2022-08-24T03:45:02.5950405Z Aug 24 03:45:02  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:221)
> 2022-08-24T03:45:02.5951638Z Aug 24 03:45:02  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
> 2022-08-24T03:45:02.5953564Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65)
> 2022-08-24T03:45:02.5955214Z Aug 24 03:45:02  at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68)
> 2022-08-24T03:45:02.5956587Z Aug 24 03:45:02  at 
> scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284)
> 2022-08-24T03:45:02.5958037Z Aug 24 03:45:02  at 
> scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:284)
> 2022-08-24

[jira] [Commented] (FLINK-29093) LookupJoinITCase failed with InternalCompilerException

2022-09-23 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608780#comment-17608780
 ] 

Qingsheng Ren commented on FLINK-29093:
---

Fixed on master: 340b100f2de5e0d90ba475aa8a00e359a61442ce 

release-1.16: c10a727990668b1a0d706f16e4f5220780422ed9

> LookupJoinITCase failed with InternalCompilerException
> --
>
> Key: FLINK-29093
> URL: https://issues.apache.org/jira/browse/FLINK-29093
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0
>Reporter: Xingbo Huang
>Assignee: Qingsheng Ren
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.16.0
>
>
> {code:java}
> 2022-08-24T03:45:02.5915521Z Aug 24 03:45:02 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2022-08-24T03:45:02.5916823Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> 2022-08-24T03:45:02.5919320Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141)
> 2022-08-24T03:45:02.5920833Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2022-08-24T03:45:02.5922361Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2022-08-24T03:45:02.5923733Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-08-24T03:45:02.5924922Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-08-24T03:45:02.5926191Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:268)
> 2022-08-24T03:45:02.5927677Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-08-24T03:45:02.5929091Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-08-24T03:45:02.5930430Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-08-24T03:45:02.5931966Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-08-24T03:45:02.5933293Z Aug 24 03:45:02  at 
> org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1277)
> 2022-08-24T03:45:02.5934708Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
> 2022-08-24T03:45:02.5936228Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
> 2022-08-24T03:45:02.5937998Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
> 2022-08-24T03:45:02.5939627Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-08-24T03:45:02.5941051Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-08-24T03:45:02.5942650Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-08-24T03:45:02.5944203Z Aug 24 03:45:02  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-08-24T03:45:02.5945740Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47)
> 2022-08-24T03:45:02.5947020Z Aug 24 03:45:02  at 
> akka.dispatch.OnComplete.internal(Future.scala:300)
> 2022-08-24T03:45:02.5948130Z Aug 24 03:45:02  at 
> akka.dispatch.OnComplete.internal(Future.scala:297)
> 2022-08-24T03:45:02.5949255Z Aug 24 03:45:02  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:224)
> 2022-08-24T03:45:02.5950405Z Aug 24 03:45:02  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:221)
> 2022-08-24T03:45:02.5951638Z Aug 24 03:45:02  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
> 2022-08-24T03:45:02.5953564Z Aug 24 03:45:02  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65)
> 2022-08-24T03:45:02.5955214Z Aug 24 03:45:02  at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68)
> 2022-08-24T03:45:02.5956587Z Aug 24 03:45:02  at 
> scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284)
> 2022-08-24T03:45:02.5958037Z Aug 24 03:45:02  at 
>

[GitHub] [flink] PatrickRen commented on pull request #20889: [BP-1.16][FLINK-28890][table] Fix semantic of latestLoadTime in caching lookup function

2022-09-23 Thread GitBox


PatrickRen commented on PR #20889:
URL: https://github.com/apache/flink/pull/20889#issuecomment-1256296543

   The CI failed because of an unrelated issue FLINK-29315. Merging to 
release-1.16


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] PatrickRen merged pull request #20889: [BP-1.16][FLINK-28890][table] Fix semantic of latestLoadTime in caching lookup function

2022-09-23 Thread GitBox


PatrickRen merged PR #20889:
URL: https://github.com/apache/flink/pull/20889


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-28890) Incorrect semantic of latestLoadTime in CachingLookupFunction and CachingAsyncLookupFunction

2022-09-23 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608559#comment-17608559
 ] 

Qingsheng Ren edited comment on FLINK-28890 at 9/23/22 2:35 PM:


Fixed on master: 24c685a58ef72db4c64c90e37056a07eb562be15

release-1.16: 0830c2ac819fd33628d11315b2485a12c23af5e6


was (Author: renqs):
Fixed on master: 24c685a58ef72db4c64c90e37056a07eb562be15

> Incorrect semantic of latestLoadTime in CachingLookupFunction and 
> CachingAsyncLookupFunction
> 
>
> Key: FLINK-28890
> URL: https://issues.apache.org/jira/browse/FLINK-28890
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0
>Reporter: Qingsheng Ren
>Assignee: Qingsheng Ren
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> The semantics of latestLoadTime in CachingLookupFunction and 
> CachingAsyncLookupFunction are not correct: it should be the time spent on 
> the latest load operation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-28890) Incorrect semantic of latestLoadTime in CachingLookupFunction and CachingAsyncLookupFunction

2022-09-23 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren closed FLINK-28890.
-
Resolution: Fixed

> Incorrect semantic of latestLoadTime in CachingLookupFunction and 
> CachingAsyncLookupFunction
> 
>
> Key: FLINK-28890
> URL: https://issues.apache.org/jira/browse/FLINK-28890
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0
>Reporter: Qingsheng Ren
>Assignee: Qingsheng Ren
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> The semantics of latestLoadTime in CachingLookupFunction and 
> CachingAsyncLookupFunction are not correct: it should be the time spent on 
> the latest load operation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] PatrickRen merged pull request #20884: [FLINK-29389][docs] Update documentation of JDBC and HBase lookup table for new caching options

2022-09-23 Thread GitBox


PatrickRen merged PR #20884:
URL: https://github.com/apache/flink/pull/20884


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] PatrickRen opened a new pull request, #20892: [BP-1.16][FLINK-29389][docs] Update documentation of JDBC and HBase lookup table for new caching options

2022-09-23 Thread GitBox


PatrickRen opened a new pull request, #20892:
URL: https://github.com/apache/flink/pull/20892

   Unchanged backport of #20884 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] snuyanzin commented on a diff in pull request #20850: [FLINK-20873][Table SQl/API] Update to calcite 1.27

2022-09-23 Thread GitBox


snuyanzin commented on code in PR #20850:
URL: https://github.com/apache/flink/pull/20850#discussion_r978758053


##
flink-table/flink-table-planner/src/test/resources/org/apache/flink/table/planner/plan/batch/table/SetOperatorsTest.xml:
##
@@ -99,13 +99,13 @@ HashJoin(joinType=[LeftSemiJoin], where=[(c = a1)], 
select=[a, b, c], build=[rig
   (XML plan diff body not preserved in the archive)

[GitHub] [flink] flinkbot commented on pull request #20892: [BP-1.16][FLINK-29389][docs] Update documentation of JDBC and HBase lookup table for new caching options

2022-09-23 Thread GitBox


flinkbot commented on PR #20892:
URL: https://github.com/apache/flink/pull/20892#issuecomment-1256318703

   
   ## CI report:
   
   * 01500feb824aee4a12ffbadc00540eee5e6e5c5b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] snuyanzin commented on a diff in pull request #20850: [FLINK-20873][Table SQl/API] Update to calcite 1.27

2022-09-23 Thread GitBox


snuyanzin commented on code in PR #20850:
URL: https://github.com/apache/flink/pull/20850#discussion_r978763440


##
flink-table/flink-table-planner/src/test/resources/org/apache/flink/table/planner/plan/stream/sql/join/JoinTest.xml:
##
@@ -72,17 +72,17 @@ Join(joinType=[InnerJoin], where=[(((a = 1) AND (x = 1)) OR 
((a = 2) AND y IS NU
   (XML plan diff body not preserved in the archive)

[jira] [Closed] (FLINK-29309) Relax allow-client-job-configurations for Table API and parameters

2022-09-23 Thread Timo Walther (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther closed FLINK-29309.

Fix Version/s: 1.16.0
   Resolution: Fixed

Fixed in master: 298b88842025da9ae218b598c1213f7238c0df7c
Fixed in 1.16: c917a6e8a09d42dc87ba27afdfd42a2a640bb984

> Relax allow-client-job-configurations for Table API and parameters
> --
>
> Key: FLINK-29309
> URL: https://issues.apache.org/jira/browse/FLINK-29309
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Timo Walther
>Assignee: Timo Walther
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> Currently, the {{execution.allow-client-job-configurations}} option is a bit 
> too strict. Due to the complexity of the configuration stack, it makes it 
> impossible to use the Table API and also prevents very common parameters like 
> {{pipeline.global-job-parameters}}.
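
For illustration, the kind of client-side parameter affected (a minimal 
sketch; the values are made up, and the boolean form of the option is assumed):
{noformat}
execution.allow-client-job-configurations: false
# with the original strict check, even this common client-side parameter
# was rejected:
pipeline.global-job-parameters: env:prod,source:kafka
{noformat}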



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] dannycranmer merged pull request #20876: [hotfix][docs] Fix typos in Kinesis Connector docs.

2022-09-23 Thread GitBox


dannycranmer merged PR #20876:
URL: https://github.com/apache/flink/pull/20876


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-table-store] JingsongLi commented on pull request #300: [FLINK-29276] BinaryInMemorySortBuffer supports to clear all memory segments

2022-09-23 Thread GitBox


JingsongLi commented on PR #300:
URL: 
https://github.com/apache/flink-table-store/pull/300#issuecomment-1256357112

   > To @JingsongLi I have tried to decrease memory buffers in 
WritePreemptMemoryTest. createFileStoreTable. Unfortunately, it causes 
java.lang.RuntimeException: File deletion conflicts detected! Give up 
committing compact changes which is the same as the master branch behavior, is 
it a known problem?
   
   I think it is a problem in the `testPartitionEmptyWriter` test. Let it go. We 
don't have to decrease memory buffers in 
`WritePreemptMemoryTest.createFileStoreTable`.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-table-store] JingsongLi merged pull request #300: [FLINK-29276] BinaryInMemorySortBuffer supports to clear all memory segments

2022-09-23 Thread GitBox


JingsongLi merged PR #300:
URL: https://github.com/apache/flink-table-store/pull/300


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-29276) Flush all memory in SortBufferMemTable.clear

2022-09-23 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-29276.

Fix Version/s: table-store-0.2.1
   Resolution: Fixed

master: 04418793c67d196b4056c5cb6730eb3e3931cbd4
release-0.2: b82f41a8c75f08e683377645a2ae6bb9eee1e755

> Flush all memory in SortBufferMemTable.clear
> 
>
> Key: FLINK-29276
> URL: https://issues.apache.org/jira/browse/FLINK-29276
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Shammon
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.3.0, table-store-0.2.1
>
>
> Now BinaryInMemorySortBuffer.reset will keep one page.
> We could free all memory.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29396) Race condition in JobMaster shutdown can leak resource requirements

2022-09-23 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608814#comment-17608814
 ] 

Matthias Pohl commented on FLINK-29396:
---

I guess you have a point here. Initially, I thought that a message back from 
the {{ResourceManager}} to the {{JobMaster}} was missing. That would then 
trigger the shutdown of the {{JobMaster}} and, as a consequence, trigger the 
stopping of the corresponding {{JobMasterServiceLeadershipRunner}} (in 
[JobMasterServiceLeadershipRunner:126|https://github.com/apache/flink/blob/e8a91fd8428e417c63b299392a84f7df9d10ddb8/flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/JobMasterServiceLeadershipRunner.java#L126]).
But this message is actually sent in 
[ResourceManager#closeJobManagerConnection:1072|https://github.com/apache/flink/blob/0263b55288be7b569f56dd42a94c5e48bcc1607b/flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java#L1072].

Essentially, we would have to instantiate a {{CompletableFuture}} in 
[JobMaster#stopExecution:1022|https://github.com/apache/flink/blob/b7dd42617a46fcecfffbea3409391e204a40b9b1/flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/JobMaster.java#L1022],
 compose it with the {{terminationFuture}} there and let this future be 
completed in 
[JobMaster#disconnectResourceManager|https://github.com/apache/flink/blob/b7dd42617a46fcecfffbea3409391e204a40b9b1/flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/JobMaster.java#L817].
 That will make the JobMaster shutdown process proceed only after we have 
received the confirmation from the {{ResourceManager}} that the disconnect 
succeeded. WDYT?
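
As a self-contained sketch of the composition idea (class and variable names 
here are illustrative, not the actual JobMaster code):
{code:java}
import java.util.concurrent.CompletableFuture;

public class ShutdownOrderingSketch {

    public static void main(String[] args) {
        // Completes when the JobMaster has stopped its own components.
        CompletableFuture<Void> localTermination = new CompletableFuture<>();
        // Completes when the ResourceManager has acknowledged the disconnect.
        CompletableFuture<Void> rmDisconnectAck = new CompletableFuture<>();

        // The overall shutdown proceeds only once both futures are done.
        CompletableFuture<Void> shutdownComplete =
                localTermination.thenCombine(rmDisconnectAck, (t, ack) -> null);

        shutdownComplete.thenRun(
                () -> System.out.println("safe to stop the leadership runner"));

        localTermination.complete(null); // e.g. end of JobMaster#stopExecution
        rmDisconnectAck.complete(null);  // e.g. JobMaster#disconnectResourceManager
    }
}
{code}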

> Race condition in JobMaster shutdown can leak resource requirements
> ---
>
> Key: FLINK-29396
> URL: https://issues.apache.org/jira/browse/FLINK-29396
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Chesnay Schepler
>Priority: Blocker
>
> When a JobMaster is stopped it
> a) sends a message to the RM informing it of the final job status
> b) removes itself as the leader.
> Once the JM loses leadership the RM is also informed about that.
> With that we have 2 messages being sent to the RM at about the same time.
> If the shutdown notifications arrives first (and job is in a terminal state) 
> we wipe the resource requirements, and the leader loss notification is 
> effectively ignored.
> If the leader loss notification arrives first we keep the resource 
> requirements, assuming that another JM will pick the job up later on, and the 
> shutdown notification will be ignored.
> This can cause a session cluster to essentially do nothing until the job 
> timeout is triggered due to no leader being present (default 5 minutes).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-29315) HDFSTest#testBlobServerRecovery fails on CI

2022-09-23 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608729#comment-17608729
 ] 

Chesnay Schepler edited comment on FLINK-29315 at 9/23/22 4:40 PM:
---

Experiment results:
- #testBlobServerRecovery fails when using a TempDir
- #testBlobServerRecovery fails with each test case using a different tmp 
directory
- #testHDFS fails when only running #testHDFS, #testChangingFileNames and 
#testBlobServerRecovery
- test failed when only running the tests in flink-fs-tests
- test succeeds when only running HDFSTest
- test succeeds when only running #testBlobServerRecovery

Will update this as more experiments conclude.


was (Author: zentol):
Experiment results:
- #testBlobServerRecovery fails when using a TempDir
- #testBlobServerRecovery fails with each test case using a different tmp 
directory
- #testHDFS fails when only running #testHDFS, #testChangingFileNames and 
#testBlobServerRecovery
- test succeeds when only running #testBlobServerRecovery

Will update this as more experiments conclude.

> HDFSTest#testBlobServerRecovery fails on CI
> ---
>
> Key: FLINK-29315
> URL: https://issues.apache.org/jira/browse/FLINK-29315
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.16.0, 1.15.2
>Reporter: Chesnay Schepler
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0
>
>
> The test started failing 2 days ago on different branches. I suspect 
> something's wrong with the CI infrastructure.
> {code:java}
> Sep 15 09:11:22 [ERROR] Failures: 
> Sep 15 09:11:22 [ERROR]   HDFSTest.testBlobServerRecovery Multiple Failures 
> (2 failures)
> Sep 15 09:11:22   java.lang.AssertionError: Test failed Error while 
> running command to get file permissions : java.io.IOException: Cannot run 
> program "ls": error=1, Operation not permitted
> Sep 15 09:11:22   at 
> java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.runCommand(Shell.java:913)
> Sep 15 09:11:22   at org.apache.hadoop.util.Shell.run(Shell.java:869)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.execCommand(Shell.java:1264)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.execCommand(Shell.java:1246)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1089)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:697)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:672)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2580)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2622)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2604)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2497)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1501)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:851)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:485)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:444)
> Sep 15 09:11:22   at 
> org.apache.flink.hdfstests.HDFSTest.createHDFS(HDFSTest.java:93)
> Sep 15 09:11:22   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Sep 15 09:11:22   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Sep 15 09:11:22   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Sep 15 09:11:22   at 
> java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
> Sep 15 09:11:22   ... 67 more
> Sep 15 09:11:22 
> Sep 15 09:11:22   java.lang.NullPointerException: 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-connector-opensearch] reta opened a new pull request, #1: [FLINK-25756] [connectors/opensearch] Dedicated Opensearch connectors

2022-09-23 Thread GitBox


reta opened a new pull request, #1:
URL: https://github.com/apache/flink-connector-opensearch/pull/1

   Signed-off-by: Andriy Redko 
   
   
   
   ## What is the purpose of the change
   
   The goal of this change is to provide dedicated Opensearch connectors.
   
   ## Brief change log
   
   The implementation is largely based on the existing Elasticsearch 7 
connector with a few notable changes (besides the dependencies and APIs):
- any mentions and uses of mapping types have been removed: it is a 
deprecated feature, scheduled for removal (indices with mapping types 
cannot be created or migrated to Opensearch 1.x and beyond)
- any mentions and uses have been removed: it is a deprecated feature, 
scheduled for removal (only `HighLevelRestClient` is used)
- the default distributions of Opensearch come with HTTPS turned on, using 
self-signed certificates: to simplify the integration, a new option 
`allow-insecure` has been added to suppress certificate validation for 
development and testing purposes
- old streaming APIs are also supported to facilitate the migration of 
existing applications from Elasticsearch 7/6 to Opensearch (the classes will 
change but the familiar model will stay)
   
   The new connector name is `opensearch` and it follows the existing 
conventions:
   
   ```
   CREATE TABLE users ( ... ) WITH (
 'connector' = 'opensearch', 
 'hosts' = 'https://localhost:9200',
 'index' = 'users', 
 'allow-insecure' = 'true', 
 'username' = 'admin', 
 'password' = 'admin');
   ```
   
   ## Verifying this change
   
   This change adds comprehensive tests and can be verified as follows (largely 
ported from the existing unit and integration tests for Elasticsearch 7):
 - Added unit tests
 - Added integration tests for end-to-end
 - Added end-to-end tests
 - Manually verified the connector by running an Opensearch cluster
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies: yes (the latest Opensearch 1.2.4 APIs as of this moment)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: yes
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? (docs - in progress, JavaDocs)
   
   Huge thanks to @snuyanzin for the help.
   Retargeting https://github.com/apache/flink/pull/18541 to a separate repository.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] stevenzwu commented on a diff in pull request #20852: [FLINK-27101][checkpointing][rest] Add restful API to trigger checkpoints

2022-09-23 Thread GitBox


stevenzwu commented on code in PR #20852:
URL: https://github.com/apache/flink/pull/20852#discussion_r978880123


##
flink-core/src/main/java/org/apache/flink/core/execution/CheckpointBackupType.java:
##
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.core.execution;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.configuration.DescribedEnum;
+import org.apache.flink.configuration.description.InlineElement;
+
+import static org.apache.flink.configuration.description.TextElement.text;
+
+/** Describes the backup type in which a checkpoint should be taken. */
+@PublicEvolving
+public enum CheckpointBackupType implements DescribedEnum {

Review Comment:
   @leletan We probably don't need this enum at all; the existing 
`CheckpointType` should be sufficient. There can be two triggering modes for 
checkpoints via the REST endpoint.
   
   - `CHECKPOINT`: checkpoint is triggered based on the job configured 
checkpoint mode (incremental or full)
   - `FULL_CHECKPOINT`: checkpoint is triggered for full checkpoint regardless 
of the job config
   
   If in the future there is a need for triggering incremental checkpoint 
regardless of the job config, we can add a new enum value 
`INCREMENTAL_CHECKPOINT` to the existing `CheckpointType` later.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] stevenzwu commented on a diff in pull request #20852: [FLINK-27101][checkpointing][rest] Add restful API to trigger checkpoints

2022-09-23 Thread GitBox


stevenzwu commented on code in PR #20852:
URL: https://github.com/apache/flink/pull/20852#discussion_r978880123


##
flink-core/src/main/java/org/apache/flink/core/execution/CheckpointBackupType.java:
##
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.core.execution;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.configuration.DescribedEnum;
+import org.apache.flink.configuration.description.InlineElement;
+
+import static org.apache.flink.configuration.description.TextElement.text;
+
+/** Describes the backup type in which a checkpoint should be taken. */
+@PublicEvolving
+public enum CheckpointBackupType implements DescribedEnum {

Review Comment:
   @leletan @pnowojski We probably don't need this enum at all; the existing 
public `CheckpointType` class should be sufficient. There can be two triggering 
modes for checkpoints via the REST endpoint.
   
   - `CHECKPOINT`: checkpoint is triggered based on the job configured 
checkpoint mode (incremental or full)
   - `FULL_CHECKPOINT`: checkpoint is triggered for full checkpoint regardless 
of the job config
   
   If in the future there is a need for triggering incremental checkpoint 
regardless of the job config, we can add a new enum value 
`INCREMENTAL_CHECKPOINT` to the existing `CheckpointType` later.
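   
   A minimal sketch of the two modes as enum values (names taken from this 
thread; not the actual Flink enum definition):
   
   ```java
   /** Sketch: triggering modes for checkpoints requested via the REST endpoint. */
   public enum CheckpointType {
       /** Follow the job's configured checkpointing mode (incremental or full). */
       CHECKPOINT,
       /** Force a full checkpoint regardless of the job configuration. */
       FULL_CHECKPOINT
   }
   ```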



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] ericxiao251 commented on pull request #19983: [FLINK-27878][datastream] Add Retry Support For Async I/O In DataStream API

2022-09-23 Thread GitBox


ericxiao251 commented on PR #19983:
URL: https://github.com/apache/flink/pull/19983#issuecomment-1256536753

   Hi @lincoln-lil, how would one use the retry strategies in their Scala Flink 
pipelines? We noticed that the retry strategy builder classes are only exposed 
in the Java codebase.
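   
   For context, the Java-side builder referenced here looks roughly like this 
(a sketch based on the FLINK-27878 docs; `input` and `MyAsyncFunction` are 
assumed names, and the Java builders should be callable from Scala as ordinary 
Java classes):
   
   ```java
   import java.util.concurrent.TimeUnit;
   
   import org.apache.flink.streaming.api.datastream.AsyncDataStream;
   import org.apache.flink.streaming.api.datastream.DataStream;
   import org.apache.flink.streaming.api.functions.async.AsyncRetryStrategy;
   import org.apache.flink.streaming.util.retryable.AsyncRetryStrategies;
   import org.apache.flink.streaming.util.retryable.RetryPredicates;
   
   // Fixed-delay retry: at most 3 attempts, 100 ms between attempts,
   // retrying whenever the async result is empty.
   AsyncRetryStrategy<String> retryStrategy =
           new AsyncRetryStrategies.FixedDelayRetryStrategyBuilder<String>(3, 100L)
                   .ifResult(RetryPredicates.EMPTY_RESULT_PREDICATE)
                   .build();
   
   // "input" is an existing DataStream<String>; "MyAsyncFunction" is a
   // user-defined AsyncFunction (both assumed for this sketch).
   DataStream<String> result =
           AsyncDataStream.unorderedWaitWithRetry(
                   input, new MyAsyncFunction(), 1000L, TimeUnit.MILLISECONDS, retryStrategy);
   ```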


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-29315) HDFSTest#testBlobServerRecovery fails on CI

2022-09-23 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608729#comment-17608729
 ] 

Chesnay Schepler edited comment on FLINK-29315 at 9/23/22 6:41 PM:
---

Experiment results:
- #testBlobServerRecovery fails when using a TempDir
- #testBlobServerRecovery fails with each test case using a different tmp 
directory
- #testHDFS fails when only running #testHDFS, #testChangingFileNames and 
#testBlobServerRecovery
- test failed when only running the tests in flink-fs-tests
- test failed when only running full HDFSTest
- test succeeds when only running #testHDFS, #testChangingFileNames and 
#testBlobServerRecovery and no other test
- test succeeds when only running #testBlobServerRecovery

Will update this as more experiments conclude.


was (Author: zentol):
Experiment results:
- #testBlobServerRecovery fails when using a TempDir
- #testBlobServerRecovery fails with each test case using a different tmp 
directory
- #testHDFS fails when only running #testHDFS, #testChangingFileNames and 
#testBlobServerRecovery
- test failed when only running the tests in flink-fs-tests
- test succeeds when only running HDFSTest
- test succeeds when only running #testBlobServerRecovery

Will update this as more experiments conclude.

> HDFSTest#testBlobServerRecovery fails on CI
> ---
>
> Key: FLINK-29315
> URL: https://issues.apache.org/jira/browse/FLINK-29315
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.16.0, 1.15.2
>Reporter: Chesnay Schepler
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0
>
>
> The test started failing 2 days ago on different branches. I suspect 
> something's wrong with the CI infrastructure.
> {code:java}
> Sep 15 09:11:22 [ERROR] Failures: 
> Sep 15 09:11:22 [ERROR]   HDFSTest.testBlobServerRecovery Multiple Failures 
> (2 failures)
> Sep 15 09:11:22   java.lang.AssertionError: Test failed Error while 
> running command to get file permissions : java.io.IOException: Cannot run 
> program "ls": error=1, Operation not permitted
> Sep 15 09:11:22   at 
> java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.runCommand(Shell.java:913)
> Sep 15 09:11:22   at org.apache.hadoop.util.Shell.run(Shell.java:869)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.execCommand(Shell.java:1264)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.execCommand(Shell.java:1246)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1089)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:697)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:672)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2580)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2622)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2604)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2497)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1501)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:851)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:485)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:444)
> Sep 15 09:11:22   at 
> org.apache.flink.hdfstests.HDFSTest.createHDFS(HDFSTest.java:93)
> Sep 15 09:11:22   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Sep 15 09:11:22   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Sep 15 09:11:22   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Sep 15 09:11:22   at 
> java.lang.ProcessBuilder.start(ProcessBuilder.java:1029

[jira] [Created] (FLINK-29403) Streamline SimpleCondition usage

2022-09-23 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-29403:


 Summary: Streamline SimpleCondition usage
 Key: FLINK-29403
 URL: https://issues.apache.org/jira/browse/FLINK-29403
 Project: Flink
  Issue Type: Improvement
  Components: Library / CEP
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.17.0


CEP SimpleConditions are essentially filter functions, but since 
SimpleCondition is an abstract class, usage ends up being incredibly verbose.

Additionally, the class should not be annotated with {{@Internal}} given how 
much it is advertised in the docs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29403) Streamline SimpleCondition usage

2022-09-23 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-29403:
-
Description: 
CEP SimpleConditions are essentially filter functions, but since 
SimpleCondition is an abstract class, usage ends up being incredibly verbose. 
We can add a simple factory method to streamline this.

Additionally, the class should not be annotated with {{@Internal}} given how 
much it is advertised in the docs.

  was:
CEP SimpleConditions are essentially filter functions, but since 
SimpleCondition is an abstract class, usage ends up being incredibly verbose.

Additionally, the class should not be annotated with {{@Internal}} given how 
much it is advertised in the docs.


> Streamline SimpleCondition usage
> 
>
> Key: FLINK-29403
> URL: https://issues.apache.org/jira/browse/FLINK-29403
> Project: Flink
>  Issue Type: Improvement
>  Components: Library / CEP
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.17.0
>
>
> CEP SimpleConditions are essentially filter functions, but since 
> SimpleCondition is an abstract class, usage ends up being incredibly verbose. 
> We can add a simple factory method to streamline this.
> Additionally, the class should not be annotated with {{@Internal}} given how 
> much it is advertised in the docs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-26999) Introduce ClickHouse Connector

2022-09-23 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608885#comment-17608885
 ] 

Robert Metzger commented on FLINK-26999:


Is the idea for this ticket to use an existing flink-clickhouse implementation 
(e.g. https://github.com/itinycheng/flink-connector-clickhouse) or something 
from scratch?

> Introduce ClickHouse Connector
> --
>
> Key: FLINK-26999
> URL: https://issues.apache.org/jira/browse/FLINK-26999
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Affects Versions: 1.15.0
>Reporter: ZhuoYu Chen
>Priority: Major
>
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-202%3A+Introduce+ClickHouse+Connector]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] zentol opened a new pull request, #20893: [FLINK-29403][cep] Streamline SimpleCondition usage

2022-09-23 Thread GitBox


zentol opened a new pull request, #20893:
URL: https://github.com/apache/flink/pull/20893

   Add `SimpleCondition.of(FilterFunction)` and adjust all tests and docs 
accordingly.
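   
   A before/after sketch of the change (the `Event` type with a `getName()` 
accessor is an assumed example, not part of the change itself):
   
   ```java
   import org.apache.flink.cep.pattern.Pattern;
   import org.apache.flink.cep.pattern.conditions.SimpleCondition;
   
   public class SimpleConditionExample {
   
       /** Minimal example event type (illustrative). */
       public static class Event {
           private final String name;
   
           public Event(String name) {
               this.name = name;
           }
   
           public String getName() {
               return name;
           }
       }
   
       public static void main(String[] args) {
           // Before: an anonymous SimpleCondition subclass is required.
           Pattern<Event, ?> verbose =
                   Pattern.<Event>begin("start")
                           .where(new SimpleCondition<Event>() {
                               @Override
                               public boolean filter(Event value) {
                                   return "start".equals(value.getName());
                               }
                           });
   
           // After: the factory method added by this PR.
           Pattern<Event, ?> concise =
                   Pattern.<Event>begin("start")
                           .where(SimpleCondition.of(
                                   value -> "start".equals(value.getName())));
       }
   }
   ```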


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


