Re: [PR] [FLINK-33187] using hashcode for parallelism map comparison [flink-kubernetes-operator]

2023-10-22 Thread via GitHub


gyfora commented on code in PR #685:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/685#discussion_r1368147631


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/autoscaler/KubernetesAutoScalerEventHandler.java:
##
@@ -43,26 +52,71 @@ public void handleEvent(
 String reason,
 String message,
 @Nullable String messageKey,
-@Nullable Duration interval) {
-if (interval == null) {
-eventRecorder.triggerEvent(
-context.getResource(),
-EventRecorder.Type.valueOf(type.name()),
-reason,
-message,
-EventRecorder.Component.Operator,
-messageKey,
-context.getKubernetesClient());
+@Nonnull Duration interval) {
+eventRecorder.triggerEvent(
+context.getResource(),
+EventRecorder.Type.valueOf(type.name()),
+reason,
+message,
+EventRecorder.Component.Operator,
+messageKey,
+context.getKubernetesClient());
+}
+
+@Override
+public void handleScalingEvent(
+KubernetesJobAutoScalerContext context,
+Map scalingSummaries,
+boolean scaled,
+@Nonnull Duration interval) {
+if (scaled) {
+AutoScalerEventHandler.super.handleScalingEvent(
+context, scalingSummaries, scaled, interval);
 } else {
-eventRecorder.triggerEventByInterval(
+var conf = context.getConfiguration();
+var scalingReport = AutoScalerEventHandler.scalingReport(scalingSummaries, scaled);
+var labels = Map.of(PARALLELISM_MAP_KEY, getParallelismHashCode(scalingSummaries));

Review Comment:
   @1996fanrui The problem with your approach is that it breaks the messageKey 
logic for scaling events (you set a new key every time the parallelism changes, 
but only when scaling is off).
   
   The general problem with the current interface is that it already contains 
many Kubernetes-specific parts (messageKey) but not enough to properly 
implement new functionality (such as what we are trying to do here: deduplicate 
events based on certain logic, like the actual parallelism overrides).
   
   In other interfaces like the StateStore we decided to go with specific 
methods, and I think that's what we should do here as well. I prefer @clarax's 
solution, as it allows us complete flexibility in the Kubernetes implementation 
without pushing anything onto the interface.
   
   To give a bit more background on the `messageKey`: we use it to identify 
the specific event when triggering in Kubernetes, so that we keep the history 
(last triggered, etc.) without always creating new event objects. This is 
completely independent of the interval / parallelism based triggering that we 
are trying to do here.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33090][checkpointing] CheckpointsCleaner clean individual chec… [flink]

2023-10-22 Thread via GitHub


yigress commented on PR #23425:
URL: https://github.com/apache/flink/pull/23425#issuecomment-1774447810

   @pnowojski could you review again? I appreciate every constructive review 
you give!





Re: [PR] [FLINK-32661][sql-gateway] Fix unstable OperationRelatedITCase.testOperationRelatedApis [flink]

2023-10-22 Thread via GitHub


flinkbot commented on PR #23564:
URL: https://github.com/apache/flink/pull/23564#issuecomment-1774393422

   
   ## CI report:
   
   * bff9234113ad13e9d535bef3761b0a61a168f956 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] [FLINK-32661][sql-gateway] Fix unstable OperationRelatedITCase.testOperationRelatedApis [flink]

2023-10-22 Thread via GitHub


Jiabao-Sun commented on PR #23564:
URL: https://github.com/apache/flink/pull/23564#issuecomment-1774391325

   Hi @WencongLiu, @fsk119.
   Could you help review it when you have time?
   Thanks.





[jira] [Updated] (FLINK-32661) OperationRelatedITCase.testOperationRelatedApis fails on AZP

2023-10-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-32661:
---
Labels: auto-deprioritized-critical pull-request-available test-stability  
(was: auto-deprioritized-critical test-stability)

> OperationRelatedITCase.testOperationRelatedApis fails on AZP
> 
>
> Key: FLINK-32661
> URL: https://issues.apache.org/jira/browse/FLINK-32661
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Gateway
>Affects Versions: 1.18.0
>Reporter: Sergey Nuyanzin
>Priority: Major
>  Labels: auto-deprioritized-critical, pull-request-available, 
> test-stability
> Attachments: screenshot-1.png
>
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51452=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=12114
> fails as 
> {noformat}
> Jul 20 04:23:49 org.opentest4j.AssertionFailedError: 
> Jul 20 04:23:49 
> Jul 20 04:23:49 Expecting actual's toString() to return:
> Jul 20 04:23:49   "PENDING"
> Jul 20 04:23:49 but was:
> Jul 20 04:23:49   "RUNNING"
> Jul 20 04:23:49   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Jul 20 04:23:49   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Jul 20 04:23:49   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Jul 20 04:23:49   at 
> org.apache.flink.table.gateway.rest.OperationRelatedITCase.testOperationRelatedApis(OperationRelatedITCase.java:91)
> Jul 20 04:23:49   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jul 20 04:23:49   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jul 20 04:23:49   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jul 20 04:23:49   at java.lang.reflect.Method.invoke(Method.java:498)
> Jul 20 04:23:49   at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:21
> ...
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-32661][sql-gateway] Fix unstable OperationRelatedITCase.testOperationRelatedApis [flink]

2023-10-22 Thread via GitHub


Jiabao-Sun opened a new pull request, #23564:
URL: https://github.com/apache/flink/pull/23564

   
   
   ## What is the purpose of the change
   
   [FLINK-32661][sql-gateway] Fix unstable 
OperationRelatedITCase.testOperationRelatedApis
   
   ## Brief change log
   
   The operation executes asynchronously, and its status is only updated from 
PENDING to RUNNING after the runBefore method has been called, so asserting 
PENDING immediately is racy.
   
   ```
   Jul 20 04:23:49 org.opentest4j.AssertionFailedError: 
   Jul 20 04:23:49 
   Jul 20 04:23:49 Expecting actual's toString() to return:
   Jul 20 04:23:49   "PENDING"
   Jul 20 04:23:49 but was:
   Jul 20 04:23:49   "RUNNING"
   Jul 20 04:23:49  at 
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   Jul 20 04:23:49  at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
   Jul 20 04:23:49  at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   Jul 20 04:23:49  at 
org.apache.flink.table.gateway.rest.OperationRelatedITCase.testOperationRelatedApis(OperationRelatedITCase.java:91)
   Jul 20 04:23:49  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
   Jul 20 04:23:49  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   Jul 20 04:23:49  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   Jul 20 04:23:49  at java.lang.reflect.Method.invoke(Method.java:498)
   Jul 20 04:23:49  at 
org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727)
   Jul 20 04:23:49  at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
   Jul 20 04:23:49  at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
   Jul 20 04:23:49  at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
   Jul 20 04:23:49  at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
   Jul 20 04:23:49  at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86)
   Jul 20 04:23:49  at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
   Jul 20 04:23:49  at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
   Jul 20 04:23:49  at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
   Jul 20 04:23:49  at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
   Jul 20 04:23:49  at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
   Jul 20 04:23:49  at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
   Jul 20 04:23:49  at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
   Jul 20 04:23:49  at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
   Jul 20 04:23:49  at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:21
   ```
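Since the status transition is asynchronous, the usual fix for this kind of flakiness is to poll for the expected status instead of asserting a single snapshot. A hedged, self-contained sketch of that pattern (hypothetical names; not the actual test code in this PR):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Sketch: poll an asynchronously updated status until it reaches the
// expected value, instead of asserting one snapshot that races with the
// PENDING -> RUNNING transition.
public class AwaitStatusSketch {

    /** Polls the supplier until it returns the expected value or the timeout elapses. */
    public static boolean awaitStatus(Supplier<String> status, String expected, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (expected.equals(status.get())) {
                return true;
            }
            Thread.sleep(10);  // back off briefly between polls
        }
        return expected.equals(status.get());  // one final check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicReference<String> status = new AtomicReference<>("PENDING");
        // Simulate the asynchronous transition PENDING -> RUNNING.
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
            status.set("RUNNING");
        }).start();
        System.out.println(awaitStatus(status::get, "RUNNING", 1_000));  // true
    }
}
```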
   
   ## Verifying this change
   
   This change is already covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not documented)
   





[jira] [Commented] (FLINK-33112) Support placement constraint

2023-10-22 Thread Junfan Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17778513#comment-17778513
 ] 

Junfan Zhang commented on FLINK-33112:
--

[https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/PlacementConstraints.html]
 The detailed info is here.

> Support placement constraint
> 
>
> Key: FLINK-33112
> URL: https://issues.apache.org/jira/browse/FLINK-33112
> Project: Flink
>  Issue Type: New Feature
>  Components: Deployment / YARN
>Reporter: Junfan Zhang
>Priority: Major
>
> YARN placement constraints were introduced in Hadoop 3.2.0 and are useful for 
> specifying affinity, anti-affinity, or colocation, similar to Kubernetes





Re: [PR] [FLINK-32611] Redirect to Apache Paimon's link instead of legacy flink table store [flink-web]

2023-10-22 Thread via GitHub


Myasuka commented on code in PR #665:
URL: https://github.com/apache/flink-web/pull/665#discussion_r1368093090


##
docs/content.zh/documentation/flink-table-store-0.3.md:
##
@@ -1,7 +1,7 @@
 ---
-weight: 10
-title: Table Store Master (snapshot)

Review Comment:
   Since flink-table-store-0.3 is the last version in which Paimon was part of 
Flink, and it cannot be found on Apache Paimon's website, I prefer to keep the 
link here in case someone wants to review the history of Apache Paimon.






[jira] [Commented] (FLINK-33154) flink on k8s,An error occurred during consuming rocketmq

2023-10-22 Thread Monody (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17778512#comment-17778512
 ] 

Monody commented on FLINK-33154:


[~martijnvisser]  hi~ Thank you for paying attention to this issue. I'm using 
the [rocketmq-flink connector|https://github.com/apache/rocketmq-flink], which 
is production ready.

This leaves me wondering where the problem might be. Do you have any guesses?

> flink on k8s,An error occurred during consuming rocketmq
> 
>
> Key: FLINK-33154
> URL: https://issues.apache.org/jira/browse/FLINK-33154
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.1.0
> Environment: 
> flink-kubernetes-operator:https://github.com/apache/flink-kubernetes-operator#current-api-version-v1beta1
> rocketmq-flink:https://github.com/apache/rocketmq-flink
>Reporter: Monody
>Priority: Major
>  Labels: RocketMQ, flink-kubernetes-operator
>
> The following error occurs when Flink consumes RocketMQ. The Flink job is 
> running on Kubernetes; the projects used are listed in the Environment above. 
> The same job runs normally on YARN, and no abnormality is found on the 
> RocketMQ server. Why does this happen, and how can it be solved?
> !https://user-images.githubusercontent.com/47728686/265662530-231c500c-fd64-4679-9b0f-ff4a025dd766.png!





[jira] [Updated] (FLINK-33316) Avoid unnecessary heavy getStreamOperatorFactory

2023-10-22 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan updated FLINK-33316:

Fix Version/s: 1.17.2
   1.18.1

> Avoid unnecessary heavy getStreamOperatorFactory
> 
>
> Key: FLINK-33316
> URL: https://issues.apache.org/jira/browse/FLINK-33316
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Affects Versions: 1.17.0, 1.18.0
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.2, 1.19.0, 1.18.1
>
>
> See FLINK-33315 for details.
> This Jira focuses on avoiding the unnecessarily heavy 
> getStreamOperatorFactory; it can optimize the memory and CPU cost of 
> Replica_2 in FLINK-33315.
> Solution: We can store the serializedUdfClassName in StreamConfig and use 
> getStreamOperatorFactoryClassName instead of the heavy 
> getStreamOperatorFactory in OperatorChain#getOperatorRecordsOutCounter.
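The idea above can be sketched as follows (hypothetical, simplified names; Flink's actual StreamConfig stores more than this): keep the factory's class name as a plain string next to the serialized factory, so callers that only need the class name can skip the expensive deserialization.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Sketch of the optimization: store the class name separately so that
// reading it does not require deserializing the whole factory object.
public class StreamConfigSketch {

    static class HeavyFactory implements Serializable {
        byte[] payload = new byte[1 << 16];  // stands in for a heavy serialized UDF
    }

    private byte[] serializedFactory;
    private String factoryClassName;

    public void setFactory(Serializable factory) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(factory);
        }
        serializedFactory = bos.toByteArray();
        factoryClassName = factory.getClass().getName();  // stored separately, cheap to read
    }

    /** Heavy path: deserializes the entire factory object. */
    public Object getFactory() throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois =
                new ObjectInputStream(new ByteArrayInputStream(serializedFactory))) {
            return ois.readObject();
        }
    }

    /** Cheap path: returns the stored class name without any deserialization. */
    public String getFactoryClassName() {
        return factoryClassName;
    }

    public static void main(String[] args) throws Exception {
        StreamConfigSketch config = new StreamConfigSketch();
        config.setFactory(new HeavyFactory());
        System.out.println(config.getFactoryClassName());
    }
}
```

A counter-setup path that only needs to distinguish factory types can then call the cheap accessor, which is the essence of the fix described in the issue.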





[jira] [Commented] (FLINK-33316) Avoid unnecessary heavy getStreamOperatorFactory

2023-10-22 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17778510#comment-17778510
 ] 

Rui Fan commented on FLINK-33316:
-

The change is small, so I pushed these commits directly.

Merged 1.17: 024fa4776d0246a283a70743f1ce3c04981daeb9

Merged 1.18: 0dd3b4ce9f0b9f193926445bf9c1f8579fa86161

 

> Avoid unnecessary heavy getStreamOperatorFactory
> 
>
> Key: FLINK-33316
> URL: https://issues.apache.org/jira/browse/FLINK-33316
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Affects Versions: 1.17.0, 1.18.0
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> See FLINK-33315 for details.
> This Jira focuses on avoiding the unnecessarily heavy 
> getStreamOperatorFactory; it can optimize the memory and CPU cost of 
> Replica_2 in FLINK-33315.
> Solution: We can store the serializedUdfClassName in StreamConfig and use 
> getStreamOperatorFactoryClassName instead of the heavy 
> getStreamOperatorFactory in OperatorChain#getOperatorRecordsOutCounter.





[jira] [Commented] (FLINK-33313) RexNodeExtractor fails to extract conditions with binary literal

2023-10-22 Thread Dan Zou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17778505#comment-17778505
 ] 

Dan Zou commented on FLINK-33313:
-

[~libenchao] OK, I will do it.

> RexNodeExtractor fails to extract conditions with binary literal
> 
>
> Key: FLINK-33313
> URL: https://issues.apache.org/jira/browse/FLINK-33313
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Dan Zou
>Assignee: Dan Zou
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> An exception will be thrown when we try to extract conditions with a binary 
> literal in RexNodeExtractor. Here is a test I added to `RexNodeExtractorTest` 
> which reproduces the case (it also requires adding a field 'blob' with bytes 
> type to RexNodeTestBase).
> {code:java}
>   @Test
>   def testExtractConditionWithBinaryLiteral(): Unit = {
> // blob
> val t0 = rexBuilder.makeInputRef(allFieldTypes.get(5), 5)
> // X'616263'
> val t1 = rexBuilder.makeBinaryLiteral(ByteString.of("616263", 16))
> // blob = X'616263'
> val a = rexBuilder.makeCall(SqlStdOperatorTable.EQUALS, t0, t1)
> val relBuilder: RexBuilder = new FlinkRexBuilder(typeFactory)
> val (convertedExpressions, unconvertedRexNodes) =
>   extractConjunctiveConditions(a, -1, allFieldNames, relBuilder, 
> functionCatalog)
> val expected: Array[Expression] = Array($"blob" === Array[Byte](97, 98, 
> 99))
> assertExpressionArrayEquals(expected, convertedExpressions)
> assertEquals(0, unconvertedRexNodes.length)
>   }
> {code}
> And here is the exception stack:
> {code:java}
> org.apache.flink.table.api.ValidationException: Data type 'BINARY(3) NOT 
> NULL' with conversion class '[B' does not support a value literal of class 
> 'org.apache.calcite.avatica.util.ByteString'.
>   at 
> org.apache.flink.table.expressions.ValueLiteralExpression.validateValueDataType(ValueLiteralExpression.java:294)
>   at 
> org.apache.flink.table.expressions.ValueLiteralExpression.<init>(ValueLiteralExpression.java:79)
>   at 
> org.apache.flink.table.expressions.ApiExpressionUtils.valueLiteral(ApiExpressionUtils.java:251)
>   at 
> org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter.visitLiteral(RexNodeExtractor.scala:503)
>   at 
> org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter.visitLiteral(RexNodeExtractor.scala:393)
>   at org.apache.calcite.rex.RexLiteral.accept(RexLiteral.java:1217)
>   at 
> org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter.$anonfun$visitCall$3(RexNodeExtractor.scala:509)
>   at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
>   at scala.collection.Iterator.foreach(Iterator.scala:937)
>   at scala.collection.Iterator.foreach$(Iterator.scala:937)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
>   at scala.collection.IterableLike.foreach(IterableLike.scala:70)
>   at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike.map(TraversableLike.scala:233)
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter.visitCall(RexNodeExtractor.scala:509)
>   at 
> org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter.visitCall(RexNodeExtractor.scala:393)
>   at org.apache.calcite.rex.RexCall.accept(RexCall.java:189)
>   at 
> org.apache.flink.table.planner.plan.utils.RexNodeExtractor$.$anonfun$extractConjunctiveConditions$2(RexNodeExtractor.scala:158)
>   at scala.collection.Iterator.foreach(Iterator.scala:937)
>   at scala.collection.Iterator.foreach$(Iterator.scala:937)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
>   at scala.collection.IterableLike.foreach(IterableLike.scala:70)
>   at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at 
> org.apache.flink.table.planner.plan.utils.RexNodeExtractor$.extractConjunctiveConditions(RexNodeExtractor.scala:157)
>   at 
> org.apache.flink.table.planner.plan.utils.RexNodeExtractor$.extractConjunctiveConditions(RexNodeExtractor.scala:119)
>   at 
> org.apache.flink.table.planner.plan.utils.RexNodeExtractorTest.extractConjunctiveConditions(RexNodeExtractorTest.scala:785)
>   at 
> 

Re: [PR] [FLINK-31180] [Test Infrastructure] Fail early when installing minikube and check whether we can retry [flink]

2023-10-22 Thread via GitHub


victor9309 commented on PR #23497:
URL: https://github.com/apache/flink/pull/23497#issuecomment-1774339850

   Thanks @XComp for the review, and thank you very much for your suggestion. 
The code from this PR has been submitted as part of PR #23528, so they can be 
merged together.





Re: [PR] [FLINK-33187] using hashcode for parallelism map comparison [flink-kubernetes-operator]

2023-10-22 Thread via GitHub


1996fanrui commented on code in PR #685:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/685#discussion_r1368068093


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/autoscaler/KubernetesAutoScalerEventHandler.java:
##
@@ -43,26 +52,71 @@ public void handleEvent(
 String reason,
 String message,
 @Nullable String messageKey,
-@Nullable Duration interval) {
-if (interval == null) {
-eventRecorder.triggerEvent(
-context.getResource(),
-EventRecorder.Type.valueOf(type.name()),
-reason,
-message,
-EventRecorder.Component.Operator,
-messageKey,
-context.getKubernetesClient());
+@Nonnull Duration interval) {
+eventRecorder.triggerEvent(
+context.getResource(),
+EventRecorder.Type.valueOf(type.name()),
+reason,
+message,
+EventRecorder.Component.Operator,
+messageKey,
+context.getKubernetesClient());
+}
+
+@Override
+public void handleScalingEvent(
+KubernetesJobAutoScalerContext context,
+Map scalingSummaries,
+boolean scaled,
+@Nonnull Duration interval) {
+if (scaled) {
+AutoScalerEventHandler.super.handleScalingEvent(
+context, scalingSummaries, scaled, interval);
 } else {
-eventRecorder.triggerEventByInterval(
+var conf = context.getConfiguration();
+var scalingReport = AutoScalerEventHandler.scalingReport(scalingSummaries, scaled);
+var labels = Map.of(PARALLELISM_MAP_KEY, getParallelismHashCode(scalingSummaries));

Review Comment:
   cc @gyfora 






Re: [PR] [FLINK-32107] [Tests] Kubernetes test failed because ofunable to establish ssl connection to github on AZP [flink]

2023-10-22 Thread via GitHub


victor9309 commented on PR #23528:
URL: https://github.com/apache/flink/pull/23528#issuecomment-1774335390

   Test crictl
   
![image](https://github.com/apache/flink/assets/18453843/f00b8709-a679-4a75-99ae-038b5c7321c7)
   





Re: [PR] [FLINK-32107] [Tests] Kubernetes test failed because ofunable to establish ssl connection to github on AZP [flink]

2023-10-22 Thread via GitHub


victor9309 commented on PR #23528:
URL: https://github.com/apache/flink/pull/23528#issuecomment-1774332772

   Thanks @XComp for the review. This PR and PR 
https://github.com/apache/flink/pull/23497 have been combined into one 
submission; it uses the RETRY_COUNT and RETRY_BACKOFF_TIME variables.





Re: [PR] [FLINK-32309][sql-gateway] Use independent resource manager for table environment [flink]

2023-10-22 Thread via GitHub


FangYongs commented on PR #22768:
URL: https://github.com/apache/flink/pull/22768#issuecomment-1774329086

   Thanks @KarmaGYZ , I will rebase master and work on this issue





[jira] [Commented] (FLINK-32716) Give 'Default'(or maybe 'None') option for 'scheduler-mode'

2023-10-22 Thread Yao Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17778491#comment-17778491
 ] 

Yao Zhang commented on FLINK-32716:
---

Hi [~kination],

The title of your PR is currently '[FLINK-32176] create default ...'; the 
issue ID FLINK-32176 is incorrect.

Please update your commit message to follow the pattern '[FLINK-32716] [core] 
your PR description'. Before pushing, you also need to regenerate the docs by 
following the instructions in flink-docs/README.md, as the CI report indicated.

> Give 'Default'(or maybe 'None') option for 'scheduler-mode'
> ---
>
> Key: FLINK-32716
> URL: https://issues.apache.org/jira/browse/FLINK-32716
> Project: Flink
>  Issue Type: Improvement
>Reporter: Kwangin (Dennis) Jung
>Priority: Minor
>
> By setting scheduler-mode to 'REACTIVE', the job scales up/down based on 
> computed status.
> [https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#advanced-scheduling-options]
> But currently only 'REACTIVE' is allowed, and when I want to deactivate it 
> with a value such as 'None', it causes an exception.
> (For now, any value other than 'REACTIVE' causes an exception.)
>  
> To make the configuration a bit more flexible, how about adding 'None' (or 
> 'Default') as an option, to run in the default mode?
>  
>  
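For reference, reactive mode is enabled via the Flink configuration like this (a sketch based on the documented option; per the issue, any other value for the key is currently rejected):

```
# flink-conf.yaml
scheduler-mode: reactive   # remove the key entirely to get the default scheduler
```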





Re: [PR] [FLINK-23598] [core] Fix a bug that caused position to be updated twice when writing a string to a byte array [flink]

2023-10-22 Thread via GitHub


injae-kim commented on code in PR #23563:
URL: https://github.com/apache/flink/pull/23563#discussion_r1367994297


##
flink-core/src/main/java/org/apache/flink/core/memory/DataOutputSerializer.java:
##
@@ -160,7 +160,6 @@ public void writeBytes(String s) throws IOException {
 for (int i = 0; i < sLen; i++) {
 writeByte(s.charAt(i));
 }
-this.position += sLen;

Review Comment:
   ```java
   @Override
   public void write(int b) throws IOException {
   if (this.position >= this.buffer.length) {
   resize(1);
   }
   this.buffer[this.position++] = (byte) (b & 0xff); // here! 
position++;
   }
   ```
   FYI, `writeByte` already increments the position internally, so the removed 
line (`L163`) updated the position twice!
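A minimal, self-contained reproduction of the double-increment bug (simplified names; not the actual `DataOutputSerializer`): each `write` call advances `position`, so adding `sLen` again after the loop overcounts.

```java
import java.util.Arrays;

// Minimal reproduction: write(int) already advances `position`, so adding
// sLen again after the loop leaves position at twice the bytes written.
public class PositionBugSketch {

    private byte[] buffer = new byte[16];
    private int position;

    private void write(int b) {
        if (position >= buffer.length) {
            buffer = Arrays.copyOf(buffer, buffer.length * 2);
        }
        buffer[position++] = (byte) (b & 0xff);  // position advances here
    }

    /** Buggy variant: position ends up at 2 * s.length(). */
    public int writeBytesBuggy(String s) {
        position = 0;
        for (int i = 0; i < s.length(); i++) {
            write(s.charAt(i));
        }
        position += s.length();  // the redundant second increment removed by this PR
        return position;
    }

    /** Fixed variant: the loop alone leaves position == s.length(). */
    public int writeBytesFixed(String s) {
        position = 0;
        for (int i = 0; i < s.length(); i++) {
            write(s.charAt(i));
        }
        return position;
    }

    public static void main(String[] args) {
        PositionBugSketch d = new PositionBugSketch();
        System.out.println(d.writeBytesBuggy("abc"));  // 6 (wrong: twice the bytes written)
        System.out.println(d.writeBytesFixed("abc"));  // 3 (correct)
    }
}
```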






[jira] [Commented] (FLINK-32940) Support projection pushdown to table source for column projections through UDTF

2023-10-22 Thread Jeyhun Karimov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17778478#comment-17778478
 ] 

Jeyhun Karimov commented on FLINK-32940:


Hi [~vsowrirajan] [~337361...@qq.com] [~lsy] [~jark], throwing my two cents 
here:

Adding _CoreRules.ProjectCorrelateTransposeRule_ is not enough to solve the 
problem, for several reasons:

* Calcite will add two projections (one for the left and one for the right 
input) [1]. Sometimes these projections can be no-ops (e.g., without 
expressions). This causes a null reference error in 
_BatchPhysicalCorrelateRule.scala: 67_ 
(_Some(calc.getProgram.expandLocalRef(calc.getProgram.getCondition))_), which 
is probably why you get this error.
* However, solving the above issue is probably not enough to get this rule 
working, mainly because of how _CoreRules.ProjectCorrelateTransposeRule_ works. 
Basically, this rule pushes projections down without further handling/correcting 
the references (e.g., LogicalTableFunctionScan keeps a stale function 
invocation expression in getCall()). As a result, LogicalTableFunctionScan will 
try to access a field that the Calcite rule has already projected away (there 
are LogicalProject operator(s) on top).
* The above issue gets even more complicated when there are more operators 
(e.g., filters and projections) with dangling references after the Calcite 
rule is applied, or when many nested fields are accessed (this results in 
LogicalCorrelate operators nested inside each other).


  
Regarding the solution, IMO we should either:
# Create a rule that inherits from _CoreRules.ProjectCorrelateTransposeRule_ 
and overrides its _onMatch_ method. We should gracefully handle the downstream 
tree of operators when pushing projections down to the LogicalCorrelate.
# Alternatively, we can use _CoreRules.ProjectCorrelateTransposeRule_ and our 
own rule to match


{code:java}
+- LogicalCorrelate
   :- LogicalProject
{code}

We cannot force matching LogicalTableFunctionScan or LogicalTableScan because 
dangling references can be anywhere in the query plan. We need to 1) find 
all RexCall fields of LogicalTableFunctionScan, 2) check whether they still 
exist after the projection pushdown, 3) if not, find which [new] project 
expressions they correspond to, and 4) rewire them. This potentially requires 
rewriting expressions throughout the whole query plan down to the leaf node.

Also, we do not need to merge LogicalProject and LogicalTableScan as part of 
this rule, since other rules will already do it. 

What do you guys think?

[1] 
https://github.com/apache/calcite/blob/c83ac69111fd9e75af5e3615af29a72284667a4a/core/src/main/java/org/apache/calcite/rel/rules/ProjectCorrelateTransposeRule.java#L126
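
The index-rewiring at the heart of steps 1-4 can be sketched without any Calcite dependency; the following `RefRewire` helper (all names invented for illustration, not part of Calcite or Flink) shows how a stale input-field index is remapped once a projection has been pushed below the correlate:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical, Calcite-free sketch: after a projection keeping old fields
// [0, 3] is pushed below the correlate, a function invocation that referenced
// old input index 3 must be rewired to new index 1 to stay valid.
public class RefRewire {

    /** Maps each surviving old input index to its position in the pushed-down projection. */
    static Map<Integer, Integer> buildMapping(List<Integer> projectedOldIndices) {
        Map<Integer, Integer> mapping = new HashMap<>();
        for (int newIdx = 0; newIdx < projectedOldIndices.size(); newIdx++) {
            mapping.put(projectedOldIndices.get(newIdx), newIdx);
        }
        return mapping;
    }

    /** Rewires one stale field reference; -1 marks a field pruned by the projection. */
    static int rewire(int oldIndex, Map<Integer, Integer> mapping) {
        return mapping.getOrDefault(oldIndex, -1);
    }

    public static void main(String[] args) {
        // Projection pushed below the correlate keeps old fields 0 (deptno) and 3 (employees).
        Map<Integer, Integer> m = buildMapping(List.of(0, 3));
        System.out.println(rewire(3, m)); // employees: old index 3 -> new index 1
        System.out.println(rewire(1, m)); // name was pruned: -1, needs handling
    }
}
```

A real rule would apply this remapping to every RexCall via an expression rewriter while walking the plan, which is exactly the part _CoreRules.ProjectCorrelateTransposeRule_ leaves undone.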

> Support projection pushdown to table source for column projections through 
> UDTF
> ---
>
> Key: FLINK-32940
> URL: https://issues.apache.org/jira/browse/FLINK-32940
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Venkata krishnan Sowrirajan
>Priority: Major
>
> Currently, Flink doesn't push down columns projected through UDTF like 
> _UNNEST_ to the table source.
> For eg:
> {code:java}
> SELECT t1.deptno, t2.ename FROM db.dept_nested t1, UNNEST(t1.employees) AS 
> t2{code}
> For the above SQL, Flink projects all the columns for DEPT_NESTED rather than 
> only _name_ and {_}employees{_}. If the table source supports nested fields 
> column projection, ideally it should project only _t1.employees.ename_ from 
> the table source.
> Query plan:
> {code:java}
> == Abstract Syntax Tree ==
> LogicalProject(deptno=[$0], ename=[$5])
> +- LogicalCorrelate(correlation=[$cor1], joinType=[inner], 
> requiredColumns=[{3}])
>    :- LogicalTableScan(table=[[hive_catalog, db, dept_nested]])
>    +- Uncollect
>       +- LogicalProject(employees=[$cor1.employees])
>          +- LogicalValues(tuples=[[{ 0 }]]){code}
> {code:java}
> == Optimized Physical Plan ==
> Calc(select=[deptno, ename])
> +- Correlate(invocation=[$UNNEST_ROWS$1($cor1.employees)], 
> correlate=[table($UNNEST_ROWS$1($cor1.employees))], 
> select=[deptno,name,skillrecord,employees,empno,ename,skills], 
> rowType=[RecordType(BIGINT deptno, VARCHAR(2147483647) name, 
> RecordType:peek_no_expand(VARCHAR(2147483647) skilltype, VARCHAR(2147483647) 
> desc, RecordType:peek_no_expand(VARCHAR(2147483647) a, VARCHAR(2147483647) b) 
> others) skillrecord, RecordType:peek_no_expand(BIGINT empno, 
> VARCHAR(2147483647) ename, RecordType:peek_no_expand(VARCHAR(2147483647) 
> type, VARCHAR(2147483647) desc, RecordType:peek_no_expand(VARCHAR(2147483647) 
> a, VARCHAR(2147483647) b) others) ARRAY skills) ARRAY employees, BIGINT 
> empno, VARCHAR(2147483647) ename, 
> RecordType:peek_no_expand(VARCHAR(2147483647) type, 

Re: [PR] Draft: FLINK-28229 (CI) [flink]

2023-10-22 Thread via GitHub


afedulov commented on PR #23558:
URL: https://github.com/apache/flink/pull/23558#issuecomment-1774210195

   @flinkbot run azure





[jira] [Resolved] (FLINK-33310) Scala before 2.12.18 doesn't compile on Java 21

2023-10-22 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-33310.
-
Fix Version/s: 1.19.0
   Resolution: Fixed

> Scala before 2.12.18 doesn't compile on Java 21
> ---
>
> Key: FLINK-33310
> URL: https://issues.apache.org/jira/browse/FLINK-33310
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> It fails with
> {noformat}
> [ERROR] error:
> [INFO]   bad constant pool index: 0 at pos: 48445
> [INFO]  while compiling: 
> [INFO] during phase: globalPhase=, enteringPhase= phase>
> [INFO]  library version: version 2.12.15
> [INFO] compiler version: version 2.12.15
> ...
> [INFO]   last tree to typer: EmptyTree
> [INFO]tree position: 
> [INFO] tree tpe: 
> [INFO]   symbol: null
> [INFO]call site:  in 
> [INFO] 
> [INFO] == Source file context for tree position ==
> {noformat}
> based on release notes 2.12.18 - the first 2.12.x supporting jdk21
> https://github.com/scala/scala/releases/tag/v2.12.18





[jira] [Closed] (FLINK-33310) Scala before 2.12.18 doesn't compile on Java 21

2023-10-22 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-33310.
---

> Scala before 2.12.18 doesn't compile on Java 21
> ---
>
> Key: FLINK-33310
> URL: https://issues.apache.org/jira/browse/FLINK-33310
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> It fails with
> {noformat}
> [ERROR] error:
> [INFO]   bad constant pool index: 0 at pos: 48445
> [INFO]  while compiling: 
> [INFO] during phase: globalPhase=, enteringPhase= phase>
> [INFO]  library version: version 2.12.15
> [INFO] compiler version: version 2.12.15
> ...
> [INFO]   last tree to typer: EmptyTree
> [INFO]tree position: 
> [INFO] tree tpe: 
> [INFO]   symbol: null
> [INFO]call site:  in 
> [INFO] 
> [INFO] == Source file context for tree position ==
> {noformat}
> based on release notes 2.12.18 - the first 2.12.x supporting jdk21
> https://github.com/scala/scala/releases/tag/v2.12.18





[jira] [Commented] (FLINK-33310) Scala before 2.12.18 doesn't compile on Java 21

2023-10-22 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17778464#comment-17778464
 ] 

Sergey Nuyanzin commented on FLINK-33310:
-

Merged as 
[6cdfca2a6aba1a387b866d9358df9c1ee7f1c138|https://github.com/apache/flink/commit/6cdfca2a6aba1a387b866d9358df9c1ee7f1c138]

> Scala before 2.12.18 doesn't compile on Java 21
> ---
>
> Key: FLINK-33310
> URL: https://issues.apache.org/jira/browse/FLINK-33310
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>
> It fails with
> {noformat}
> [ERROR] error:
> [INFO]   bad constant pool index: 0 at pos: 48445
> [INFO]  while compiling: 
> [INFO] during phase: globalPhase=, enteringPhase= phase>
> [INFO]  library version: version 2.12.15
> [INFO] compiler version: version 2.12.15
> ...
> [INFO]   last tree to typer: EmptyTree
> [INFO]tree position: 
> [INFO] tree tpe: 
> [INFO]   symbol: null
> [INFO]call site:  in 
> [INFO] 
> [INFO] == Source file context for tree position ==
> {noformat}
> based on release notes 2.12.18 - the first 2.12.x supporting jdk21
> https://github.com/scala/scala/releases/tag/v2.12.18





[jira] [Assigned] (FLINK-33310) Scala before 2.12.18 doesn't compile on Java 21

2023-10-22 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin reassigned FLINK-33310:
---

Assignee: Sergey Nuyanzin

> Scala before 2.12.18 doesn't compile on Java 21
> ---
>
> Key: FLINK-33310
> URL: https://issues.apache.org/jira/browse/FLINK-33310
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>
> It fails with
> {noformat}
> [ERROR] error:
> [INFO]   bad constant pool index: 0 at pos: 48445
> [INFO]  while compiling: 
> [INFO] during phase: globalPhase=, enteringPhase= phase>
> [INFO]  library version: version 2.12.15
> [INFO] compiler version: version 2.12.15
> ...
> [INFO]   last tree to typer: EmptyTree
> [INFO]tree position: 
> [INFO] tree tpe: 
> [INFO]   symbol: null
> [INFO]call site:  in 
> [INFO] 
> [INFO] == Source file context for tree position ==
> {noformat}
> based on release notes 2.12.18 - the first 2.12.x supporting jdk21
> https://github.com/scala/scala/releases/tag/v2.12.18





Re: [PR] [FLINK-33310] Scala before 2.12.18 doesn't compile on Java 21 [flink]

2023-10-22 Thread via GitHub


snuyanzin merged PR #23548:
URL: https://github.com/apache/flink/pull/23548





Re: [PR] [FLINK-33310] Scala before 2.12.18 doesn't compile on Java 21 [flink]

2023-10-22 Thread via GitHub


snuyanzin commented on PR #23548:
URL: https://github.com/apache/flink/pull/23548#issuecomment-1774179490

   thanks for taking a look





[jira] [Closed] (FLINK-33318) Create TimeMsPerSec metrics for each collector and add metrics across all collectors

2023-10-22 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-33318.
--
Resolution: Fixed

merged to master 76dda5d3cf7c4ecf255370679b27e11dd974a293

> Create TimeMsPerSec metrics for each collector and add metrics across all 
> collectors
> 
>
> Key: FLINK-33318
> URL: https://issues.apache.org/jira/browse/FLINK-33318
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Metrics
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Add the new TimeMsPerSec GC metrics per collector.
> Add a new `All` group and aggregate GC metrics across collectors





Re: [PR] [FLINK-33318] Expose aggregated collector metrics and measure timeMsPerSecond [flink]

2023-10-22 Thread via GitHub


gyfora merged PR #23554:
URL: https://github.com/apache/flink/pull/23554





[jira] [Updated] (FLINK-33335) Remove unused e2e tests

2023-10-22 Thread Alexander Fedulov (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Fedulov updated FLINK-33335:
--
Description: 
FLINK-17375 removed _run-pre-commit-tests.sh_ in Flink 1.12 [1]. Since then the 
following tests are not executed anymore:
_test_state_migration.sh_
_test_state_evolution.sh_
_test_streaming_kinesis.sh_
_test_streaming_classloader.sh_
_test_streaming_distributed_cache_via_blob.sh_

[1]   
https://github.com/apache/flink/pull/12268/files#diff-39f0aea40d2dd3f026544bb4c2502b2e9eab4c825df5f2b68c6d4ca8c39d7b5e

  was:
FLINK-17375 removed _run-pre-commit-tests.sh_ in Flink 1.12 [1]. Since then the 
following tests are not executed anymore:
_test_state_migration.sh_
_test_state_evolution.sh_
_test_streaming_kinesis.sh_
_test_streaming_classloader.sh_
_test_streaming_distributed_cache_via_blob.sh_

[1]https://github.com/apache/flink/pull/12268/files#diff-39f0aea40d2dd3f026544bb4c2502b2e9eab4c825df5f2b68c6d4ca8c39d7b5e


> Remove unused e2e tests
> ---
>
> Key: FLINK-33335
> URL: https://issues.apache.org/jira/browse/FLINK-33335
> Project: Flink
>  Issue Type: Improvement
>Reporter: Alexander Fedulov
>Assignee: Alexander Fedulov
>Priority: Major
>
> FLINK-17375 removed _run-pre-commit-tests.sh_ in Flink 1.12 [1]. Since then 
> the following tests are not executed anymore:
> _test_state_migration.sh_
> _test_state_evolution.sh_
> _test_streaming_kinesis.sh_
> _test_streaming_classloader.sh_
> _test_streaming_distributed_cache_via_blob.sh_
> [1]   
> https://github.com/apache/flink/pull/12268/files#diff-39f0aea40d2dd3f026544bb4c2502b2e9eab4c825df5f2b68c6d4ca8c39d7b5e





[jira] [Comment Edited] (FLINK-33335) Remove unused e2e tests

2023-10-22 Thread Alexander Fedulov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17778126#comment-17778126
 ] 

Alexander Fedulov edited comment on FLINK-33335 at 10/22/23 10:07 AM:
--

[~rmetzger] [~chesnay] 
Could you please confirm that disabling of the aforementioned tests was 
intentional?
I would like to understand if we can simply drop 
_org.apache.flink.test.StatefulStreamingJob_ without moving it to a FLIP-27 
source.

 


was (Author: afedulov):
[~rmetzger] [~chesnay] 
Could you please confirm that disabling of the aforementioned tests was 
intentional?
I would like to understand if we can simply drop 
`org.apache.flink.test.StatefulStreamingJob` without moving it to a FLIP-27 
source.

 

> Remove unused e2e tests
> ---
>
> Key: FLINK-33335
> URL: https://issues.apache.org/jira/browse/FLINK-33335
> Project: Flink
>  Issue Type: Improvement
>Reporter: Alexander Fedulov
>Assignee: Alexander Fedulov
>Priority: Major
>
> FLINK-17375 removed _run-pre-commit-tests.sh_ in Flink 1.12 [1]. Since then 
> the following tests are not executed anymore:
> _test_state_migration.sh_
> _test_state_evolution.sh_
> _test_streaming_kinesis.sh_
> _test_streaming_classloader.sh_
> _test_streaming_distributed_cache_via_blob.sh_
> [1]https://github.com/apache/flink/pull/12268/files#diff-39f0aea40d2dd3f026544bb4c2502b2e9eab4c825df5f2b68c6d4ca8c39d7b5e





[jira] [Updated] (FLINK-33138) Prometheus Connector Sink - DataStream API implementation

2023-10-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33138:
---
Labels: pull-request-available  (was: )

> Prometheus Connector Sink - DataStream API implementation
> -
>
> Key: FLINK-33138
> URL: https://issues.apache.org/jira/browse/FLINK-33138
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Prometheus
>Reporter: Lorenzo Nicora
>Assignee: Lorenzo Nicora
>Priority: Major
>  Labels: pull-request-available
> Fix For: prometheus-connector-1.0.0
>
>






[PR] [FLINK-33138] DataStream API implementation [flink-connector-prometheus]

2023-10-22 Thread via GitHub


nicusX opened a new pull request, #1:
URL: https://github.com/apache/flink-connector-prometheus/pull/1

   ## Purpose of the change
   
   DataStream API implementation
   
   Also includes:
   * AMP Request Signed implementation 
([FLINK-33141](https://issues.apache.org/jira/browse/FLINK-33141))
   * Sample application, running on Amazon Managed Service for Apache Flink and 
writing to Amazon Managed Prometheus 
   
   ## Verifying this change
   
   * Unit tests
   * Run included sample application on Amazon Managed Service for Apache Flink
   
   ### Significant changes
   
   (Please check the boxes [x] where the answer is "yes")
   
* [x] Dependencies have been added or upgraded
* [x] Public API has been changed (Public API is any class annotated with 
@Public(Evolving))
* [x] Serializers have been changed
* [x] New feature has been introduced
* If yes, how is this documented? JavaDocs and README.md





Re: [PR] [FLINK-31180] [Test Infrastructure] Fail early when installing minikube and check whether we can retry [flink]

2023-10-22 Thread via GitHub


victor9309 commented on PR #23497:
URL: https://github.com/apache/flink/pull/23497#issuecomment-1774007259

   Thanks @XComp for the review. I am very sorry that I tested it before. Thank 
you very much for your suggestion. I have modified it
   





Re: [PR] [FLINK-31180] [Test Infrastructure] Fail early when installing minikube and check whether we can retry [flink]

2023-10-22 Thread via GitHub


victor9309 commented on PR #23497:
URL: https://github.com/apache/flink/pull/23497#issuecomment-1774004090

   Thanks @XComp for the review. I am very sorry that I had only tested the 
timeout scenario before. Thank you very much for your suggestion; I have 
revised it.

