[GitHub] [flink-kubernetes-operator] SteNicholas commented on pull request #247: [FLINK-27668] Document dynamic operator configuration

2022-05-27 Thread GitBox


SteNicholas commented on PR #247:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/247#issuecomment-1140177723

   @gyfora , PTAL.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-27668) Document dynamic operator configuration

2022-05-27 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17543414#comment-17543414
 ] 

Nicholas Jiang commented on FLINK-27668:


[~gyfora], could you please assign this ticket to me? I have pushed a pull 
request for the documentation.

> Document dynamic operator configuration
> ---
>
> Key: FLINK-27668
> URL: https://issues.apache.org/jira/browse/FLINK-27668
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.0.0
>
>
> The Kubernetes operator now supports dynamic config changes through the 
> operator ConfigMap.
> This feature is not documented properly and should be added to the 
> operations/configuration page.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27668) Document dynamic operator configuration

2022-05-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27668:
---
Labels: pull-request-available  (was: )

> Document dynamic operator configuration
> ---
>
> Key: FLINK-27668
> URL: https://issues.apache.org/jira/browse/FLINK-27668
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.0.0
>
>
> The Kubernetes operator now supports dynamic config changes through the 
> operator ConfigMap.
> This feature is not documented properly and should be added to the 
> operations/configuration page.





[GitHub] [flink-kubernetes-operator] SteNicholas opened a new pull request, #247: [FLINK-27668] Document dynamic operator configuration

2022-05-27 Thread GitBox


SteNicholas opened a new pull request, #247:
URL: https://github.com/apache/flink-kubernetes-operator/pull/247

   The Kubernetes operator now supports dynamic config changes through the 
operator ConfigMap. This feature is not documented properly and should be 
added to the operations/configuration page.
   
   **The brief change log**
   
   - Document dynamic operator configuration in the `operations/configuration` 
page.





[jira] [Commented] (FLINK-27009) Support SQL job submission in flink kubernetes operator

2022-05-27 Thread Biao Geng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17543405#comment-17543405
 ] 

Biao Geng commented on FLINK-27009:
---

Hi [~mbalassi], thanks for paying attention. I have already started working on 
this. After some offline discussion with [~wangyang0918] and [~dianfu], we have 
some alternative solutions and now prefer to do some work on 
[FLINK-26541|https://issues.apache.org/jira/browse/FLINK-26541] first. I am 
doing some basic verification work to make sure the solution is feasible for our 
k8s operator, and once it is ready, I will present the full FLIP for the 
community to discuss. Based on my current progress, I hope to propose it by 
next Friday.

> Support SQL job submission in flink kubernetes operator
> ---
>
> Key: FLINK-27009
> URL: https://issues.apache.org/jira/browse/FLINK-27009
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Biao Geng
>Priority: Major
>
> Currently, the flink kubernetes operator is for jar jobs using an application 
> or session cluster. For SQL jobs, there is no out-of-the-box solution in the 
> operator.
> One simple, short-term solution is to wrap the SQL script into a jar job 
> using the Table API, with limitations.
> The long-term solution may work with 
> [FLINK-26541|https://issues.apache.org/jira/browse/FLINK-26541] to achieve 
> full support.
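
As a rough illustration of the short-term workaround described above (wrapping a 
SQL script into a jar job with the Table API), a minimal sketch; the script-path 
argument and the naive statement splitting are assumptions for illustration, not 
an agreed design:
{code:java}
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

/** Minimal jar entry point that executes the statements of a SQL script. */
public class SqlScriptJob {
    public static void main(String[] args) throws Exception {
        // Path to the SQL script, e.g. baked into the image or mounted into the pod.
        String script = new String(Files.readAllBytes(Paths.get(args[0])));
        TableEnvironment tEnv =
                TableEnvironment.create(
                        EnvironmentSettings.newInstance().inStreamingMode().build());
        // Naive splitting; does not handle ';' inside string literals or comments.
        for (String statement : script.split(";")) {
            if (!statement.trim().isEmpty()) {
                tEnv.executeSql(statement);
            }
        }
    }
}
{code}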





[jira] [Created] (FLINK-27822) Translate the doc of checkpoint/savepoint guarantees

2022-05-27 Thread fanrui (Jira)
fanrui created FLINK-27822:
--

 Summary: Translate the doc of checkpoint/savepoint guarantees
 Key: FLINK-27822
 URL: https://issues.apache.org/jira/browse/FLINK-27822
 Project: Flink
  Issue Type: Improvement
  Components: chinese-translation, Documentation
Affects Versions: 1.15.0, 1.16.0
Reporter: fanrui
 Fix For: 1.16.0, 1.15.1


Translate the changes from FLINK-26134.





[jira] (FLINK-26937) Introduce Leveled compression for LSM

2022-05-27 Thread liwei li (Jira)


[ https://issues.apache.org/jira/browse/FLINK-26937 ]


liwei li deleted comment on FLINK-26937:
--

was (Author: liliwei):
may i have a try?

> Introduce Leveled compression for LSM
> -
>
> Key: FLINK-26937
> URL: https://issues.apache.org/jira/browse/FLINK-26937
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: table-store-0.2.0
>
>
> Currently ORC uses ZLIB compression by default at all levels. In fact, the 
> files at level 0 will definitely be rewritten, so we could use different 
> compression for different levels.





[jira] [Updated] (FLINK-27788) Adding annotation to k8s operator Pod

2022-05-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27788:
---
Labels: pull-request-available  (was: )

> Adding annotation to k8s operator Pod
> 
>
> Key: FLINK-27788
> URL: https://issues.apache.org/jira/browse/FLINK-27788
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-0.1.0, kubernetes-operator-1.1.0
>Reporter: Jaganathan Asokan
>Assignee: Jaganathan Asokan
>Priority: Minor
>  Labels: pull-request-available
>
> Currently we lack the option to natively add annotations to Flink operator 
> pods. Providing this feature directly in our existing Helm chart could be 
> useful. One potential use case for allowing annotations on the Pod is to enable 
> scraping of operator metrics by monitoring infrastructure like Prometheus, 
> Datadog, etc.





[GitHub] [flink-kubernetes-operator] mbalassi commented on a diff in pull request #246: [FLINK-27788] Adding annotation to k8s operator Pod

2022-05-27 Thread GitBox


mbalassi commented on code in PR #246:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/246#discussion_r884046691


##
helm/flink-kubernetes-operator/templates/flink-operator.yaml:
##
@@ -34,6 +34,12 @@ spec:
 {{- include "flink-operator.selectorLabels" . | nindent 8 }}
   annotations:
 kubectl.kubernetes.io/default-container: {{ .Chart.Name }}
+  {{- $keyExist := .Values.operatorPodDeployment | default dict -}}

Review Comment:
   This could be just `if .Values.operatorPodDeployment.annotations`, right?
   We are using the same convention throughout the helm chart.



##
helm/flink-kubernetes-operator/values.yaml:
##
@@ -30,6 +30,9 @@ image:
 rbac:
   create: true
 
+operatorPodDeployment:

Review Comment:
   I find the name confusing. Is it for the pod or the deployment?






[jira] [Commented] (FLINK-27009) Support SQL job submission in flink kubernetes operator

2022-05-27 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-27009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17543358#comment-17543358
 ] 

Márton Balassi commented on FLINK-27009:


Hi [~bgeng777], do you mind if I take a stab at this or are you already working 
on it?

> Support SQL job submission in flink kubernetes operator
> ---
>
> Key: FLINK-27009
> URL: https://issues.apache.org/jira/browse/FLINK-27009
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Biao Geng
>Priority: Major
>
> Currently, the flink kubernetes operator is for jar jobs using an application 
> or session cluster. For SQL jobs, there is no out-of-the-box solution in the 
> operator.
> One simple, short-term solution is to wrap the SQL script into a jar job 
> using the Table API, with limitations.
> The long-term solution may work with 
> [FLINK-26541|https://issues.apache.org/jira/browse/FLINK-26541] to achieve 
> full support.





[jira] [Closed] (FLINK-27506) update playgrounds for Flink 1.14

2022-05-27 Thread David Anderson (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Anderson closed FLINK-27506.
--
Fix Version/s: 1.14.4
   Resolution: Fixed

> update playgrounds for Flink 1.14
> -
>
> Key: FLINK-27506
> URL: https://issues.apache.org/jira/browse/FLINK-27506
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation / Training / Exercises
>Affects Versions: 1.14.4
>Reporter: David Anderson
>Priority: Major
>  Labels: starter
> Fix For: 1.14.4
>
>
> All of the flink-playgrounds need to be updated for 1.14.





[jira] [Commented] (FLINK-27813) java.lang.IllegalStateException: after migration from statefun-3.1.1 to 3.2.0

2022-05-27 Thread Galen Warren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17543099#comment-17543099
 ] 

Galen Warren commented on FLINK-27813:
--

Are you using RequestReplyFunctionBuilder to build your statefun job?

> java.lang.IllegalStateException: after migration from statefun-3.1.1 to 3.2.0
> -
>
> Key: FLINK-27813
> URL: https://issues.apache.org/jira/browse/FLINK-27813
> Project: Flink
>  Issue Type: Bug
>  Components: API / State Processor
>Affects Versions: statefun-3.2.0
>Reporter: Oleksandr
>Priority: Blocker
> Attachments: screenshot-1.png
>
>
> Issue was met after migration from 
> flink-statefun:3.1.1-java11
> to
> flink-statefun:3.2.0-java11
>  
> {code:java}
> ts.execution-paymentManualValidationRequestEgress-egress, Sink: 
> ua.aval.payments.execution-norkomExecutionRequested-egress, Sink: 
> ua.aval.payments.execution-shareStatusToManualValidation-egress) (1/1)#15863 
> (98a1f6bef9a435ef9d0292fefeb64022) switched from RUNNING to FAILED with 
> failure cause: java.lang.IllegalStateException: Unable to parse Netty 
> transport spec.\n\tat 
> org.apache.flink.statefun.flink.core.nettyclient.NettyRequestReplyClientFactory.parseTransportSpec(NettyRequestReplyClientFactory.java:56)\n\tat
>  
> org.apache.flink.statefun.flink.core.nettyclient.NettyRequestReplyClientFactory.createTransportClient(NettyRequestReplyClientFactory.java:39)\n\tat
>  
> org.apache.flink.statefun.flink.core.httpfn.HttpFunctionProvider.functionOfType(HttpFunctionProvider.java:49)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.PredefinedFunctionLoader.load(PredefinedFunctionLoader.java:70)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.PredefinedFunctionLoader.load(PredefinedFunctionLoader.java:47)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.StatefulFunctionRepository.load(StatefulFunctionRepository.java:71)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.StatefulFunctionRepository.get(StatefulFunctionRepository.java:59)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.LocalFunctionGroup.enqueue(LocalFunctionGroup.java:48)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.Reductions.enqueue(Reductions.java:153)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.Reductions.apply(Reductions.java:148)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.FunctionGroupOperator.processElement(FunctionGroupOperator.java:90)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:99)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:80)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)\n\tat
>  
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)\n\tat
>  
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)\n\tat
>  
> org.apache.flink.statefun.flink.core.feedback.FeedbackUnionOperator.sendDownstream(FeedbackUnionOperator.java:180)\n\tat
>  
> org.apache.flink.statefun.flink.core.feedback.FeedbackUnionOperator.processElement(FeedbackUnionOperator.java:86)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:233)\n\tat
>  
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:134)\n\tat
>  
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:105)\n\tat
>  
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:496)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:203)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:809)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:761)\n\tat
>  
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)\n\tat
>  
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:937)\n\tat
>  org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766)\n\tat 
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)\n\tat 
> java.base/java.lang.Thread.run(Unknown Source)\nCaused by: 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonMappingException:
>  Time interval unit label 'm' does not match any of the recognized units: 
> DAYS: (d | day | days), HOURS: (h | hour | hours), 

[jira] [Commented] (FLINK-27813) java.lang.IllegalStateException: after migration from statefun-3.1.1 to 3.2.0

2022-05-27 Thread Oleksandr (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17543094#comment-17543094
 ] 

Oleksandr commented on FLINK-27813:
---

[~galenwarren]  we have such specs for example:
{code:java}
public static final ValueSpec ORIGINAL_CALLER =
        ValueSpec.named("originalCaller")
                .thatExpireAfterWrite(ChronoUnit.MONTHS.getDuration())
                .withUtf8StringType();
public static final ValueSpec CALLER_ID =
        ValueSpec.named("callerId")
                .thatExpireAfterWrite(ChronoUnit.MONTHS.getDuration())
                .withUtf8StringType();
public static final ValueSpec RESPONSE_STORAGE =
        ValueSpec.named("SightResponse")
                .thatExpireAfterWrite(ChronoUnit.MONTHS.getDuration())
                .withCustomType(SightValidationResponse.TYPE);
public static final ValueSpec ACT_RESPONSE_STORAGE =
        ValueSpec.named("ActResponse")
                .thatExpireAfterWrite(ChronoUnit.MONTHS.getDuration())
                .withCustomType(ProtobufTypes.sepAckType());
public static final ValueSpec REQUEST_ATTEMPT =
        ValueSpec.named("attempt")
                .thatExpireAfterWrite(ChronoUnit.HOURS.getDuration())
                .withIntType();
public static final ValueSpec RETRYABLE_RUNNER_VALUE_SPEC =
        ValueSpec.named("retryableRunner")
                .withCustomType(RetryableRunner.TYPE);
public static final ValueSpec VALIDATION_IN_PROGRESS =
        ValueSpec.named("MonitoringValidationInProgress")
                .thatExpireAfterWrite(ChronoUnit.WEEKS.getDuration())
                .withBooleanType();
{code}
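
On the "Time interval unit label 'm'" error quoted in this thread: assuming the 
timeout strings in the transport spec are parsed with Flink's duration format 
(the unit list in the exception message matches org.apache.flink.util.TimeUtils), 
a bare "m" is not an accepted minutes label while "min" is. A small check, for 
illustration only:
{code:java}
import java.time.Duration;

import org.apache.flink.util.TimeUtils;

public class DurationLabelCheck {
    public static void main(String[] args) {
        // "min" is one of the labels listed in the exception, so this parses.
        Duration ok = TimeUtils.parseDuration("2min");
        System.out.println(ok); // PT2M
        try {
            // A bare "m" is not in the list, so this is expected to fail with the
            // same "Time interval unit label 'm'" message seen in the stack trace.
            TimeUtils.parseDuration("2m");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
{code}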

> java.lang.IllegalStateException: after migration from statefun-3.1.1 to 3.2.0
> -
>
> Key: FLINK-27813
> URL: https://issues.apache.org/jira/browse/FLINK-27813
> Project: Flink
>  Issue Type: Bug
>  Components: API / State Processor
>Affects Versions: statefun-3.2.0
>Reporter: Oleksandr
>Priority: Blocker
> Attachments: screenshot-1.png
>
>
> Issue was met after migration from 
> flink-statefun:3.1.1-java11
> to
> flink-statefun:3.2.0-java11
>  
> {code:java}
> ts.execution-paymentManualValidationRequestEgress-egress, Sink: 
> ua.aval.payments.execution-norkomExecutionRequested-egress, Sink: 
> ua.aval.payments.execution-shareStatusToManualValidation-egress) (1/1)#15863 
> (98a1f6bef9a435ef9d0292fefeb64022) switched from RUNNING to FAILED with 
> failure cause: java.lang.IllegalStateException: Unable to parse Netty 
> transport spec.\n\tat 
> org.apache.flink.statefun.flink.core.nettyclient.NettyRequestReplyClientFactory.parseTransportSpec(NettyRequestReplyClientFactory.java:56)\n\tat
>  
> org.apache.flink.statefun.flink.core.nettyclient.NettyRequestReplyClientFactory.createTransportClient(NettyRequestReplyClientFactory.java:39)\n\tat
>  
> org.apache.flink.statefun.flink.core.httpfn.HttpFunctionProvider.functionOfType(HttpFunctionProvider.java:49)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.PredefinedFunctionLoader.load(PredefinedFunctionLoader.java:70)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.PredefinedFunctionLoader.load(PredefinedFunctionLoader.java:47)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.StatefulFunctionRepository.load(StatefulFunctionRepository.java:71)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.StatefulFunctionRepository.get(StatefulFunctionRepository.java:59)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.LocalFunctionGroup.enqueue(LocalFunctionGroup.java:48)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.Reductions.enqueue(Reductions.java:153)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.Reductions.apply(Reductions.java:148)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.FunctionGroupOperator.processElement(FunctionGroupOperator.java:90)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:99)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:80)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)\n\tat
>  
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)\n\tat
>  
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)\n\tat
>  
> org.apache.flink.statefun.flink.core.feedback.FeedbackUnionOperator.sendDownstream(FeedbackUnionOperator.java:180)\n\tat
>  
> org.apache.flink.statefun.flink.core.feedback.FeedbackUnionOperator.processElement(FeedbackUnionOperator.java:86)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:233)\n\tat
>  
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:134)\n\tat
>  
> 

[jira] [Updated] (FLINK-27813) java.lang.IllegalStateException: after migration from statefun-3.1.1 to 3.2.0

2022-05-27 Thread Oleksandr (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr updated FLINK-27813:
--
Description: 
Issue was met after migration from 

flink-statefun:3.1.1-java11

to

flink-statefun:3.2.0-java11

 
{code:java}
ts.execution-paymentManualValidationRequestEgress-egress, Sink: 
ua.aval.payments.execution-norkomExecutionRequested-egress, Sink: 
ua.aval.payments.execution-shareStatusToManualValidation-egress) (1/1)#15863 
(98a1f6bef9a435ef9d0292fefeb64022) switched from RUNNING to FAILED with failure 
cause: java.lang.IllegalStateException: Unable to parse Netty transport 
spec.\n\tat 
org.apache.flink.statefun.flink.core.nettyclient.NettyRequestReplyClientFactory.parseTransportSpec(NettyRequestReplyClientFactory.java:56)\n\tat
 
org.apache.flink.statefun.flink.core.nettyclient.NettyRequestReplyClientFactory.createTransportClient(NettyRequestReplyClientFactory.java:39)\n\tat
 
org.apache.flink.statefun.flink.core.httpfn.HttpFunctionProvider.functionOfType(HttpFunctionProvider.java:49)\n\tat
 
org.apache.flink.statefun.flink.core.functions.PredefinedFunctionLoader.load(PredefinedFunctionLoader.java:70)\n\tat
 
org.apache.flink.statefun.flink.core.functions.PredefinedFunctionLoader.load(PredefinedFunctionLoader.java:47)\n\tat
 
org.apache.flink.statefun.flink.core.functions.StatefulFunctionRepository.load(StatefulFunctionRepository.java:71)\n\tat
 
org.apache.flink.statefun.flink.core.functions.StatefulFunctionRepository.get(StatefulFunctionRepository.java:59)\n\tat
 
org.apache.flink.statefun.flink.core.functions.LocalFunctionGroup.enqueue(LocalFunctionGroup.java:48)\n\tat
 
org.apache.flink.statefun.flink.core.functions.Reductions.enqueue(Reductions.java:153)\n\tat
 
org.apache.flink.statefun.flink.core.functions.Reductions.apply(Reductions.java:148)\n\tat
 
org.apache.flink.statefun.flink.core.functions.FunctionGroupOperator.processElement(FunctionGroupOperator.java:90)\n\tat
 
org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:99)\n\tat
 
org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:80)\n\tat
 
org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)\n\tat
 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)\n\tat
 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)\n\tat
 
org.apache.flink.statefun.flink.core.feedback.FeedbackUnionOperator.sendDownstream(FeedbackUnionOperator.java:180)\n\tat
 
org.apache.flink.statefun.flink.core.feedback.FeedbackUnionOperator.processElement(FeedbackUnionOperator.java:86)\n\tat
 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:233)\n\tat
 
org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:134)\n\tat
 
org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:105)\n\tat
 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)\n\tat
 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:496)\n\tat
 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:203)\n\tat
 
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:809)\n\tat
 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:761)\n\tat
 
org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)\n\tat
 
org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:937)\n\tat 
org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766)\n\tat 
org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)\n\tat 
java.base/java.lang.Thread.run(Unknown Source)\nCaused by: 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonMappingException:
 Time interval unit label 'm' does not match any of the recognized units: DAYS: 
(d | day | days), HOURS: (h | hour | hours), MINUTES: (min | minute | minutes), 
SECONDS: (s | sec | secs | second | seconds), MILLISECONDS: (ms | milli | 
millis | millisecond | milliseconds), MICROSECONDS: (µs | micro | micros | 
microsecond | microseconds), NANOSECONDS: (ns | nano | nanos | nanosecond | 
nanoseconds) (through reference chain: 
org.apache.flink.statefun.flink.core.nettyclient.NettyRequestReplySpec[\"timeouts\"]->org.apache.flink.statefun.flink.core.nettyclient.NettyRequestReplySpec$Timeouts[\"call\"])\n\tat
 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:390)\n\tat
 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:349)\n\tat
 

[jira] [Closed] (FLINK-27818) Model enums as references in OpenAPI spec

2022-05-27 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-27818.

Resolution: Fixed

master: e7834d8b56a7b4b1c2674ab399032e21e76d9e63

> Model enums as references in OpenAPI spec
> -
>
> Key: FLINK-27818
> URL: https://issues.apache.org/jira/browse/FLINK-27818
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Runtime / REST
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>






[GitHub] [flink] zentol merged pull request #19835: [FLINK-27818][rest][docs] Model enums as references

2022-05-27 Thread GitBox


zentol merged PR #19835:
URL: https://github.com/apache/flink/pull/19835





[jira] [Commented] (FLINK-27734) Not showing checkpoint interval properly in WebUI when checkpoint is disabled

2022-05-27 Thread EMing Zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17543006#comment-17543006
 ] 

EMing Zhou commented on FLINK-27734:


Hi [~Feifan Wang],

   I also found that it may be due to an additional = sign. I am going to test 
it and submit a PR to deal with this issue. Since you think so as well, don't 
forget that the 1.15 and 1.16 versions need two separate pull requests. Thank you.

> Not showing checkpoint interval properly  in WebUI when checkpoint is disabled
> --
>
> Key: FLINK-27734
> URL: https://issues.apache.org/jira/browse/FLINK-27734
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.15.0
>Reporter: Feifan Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2022-05-22-23-42-46-365.png
>
>
> Not showing checkpoint interval properly  in WebUI when checkpoint is disabled
> !image-2022-05-22-23-42-46-365.png|width=1019,height=362!





[GitHub] [flink] flinkbot commented on pull request #19837: [FLINK-24490][docs] The sample code is wrong in Apache Kafka Connecto…

2022-05-27 Thread GitBox


flinkbot commented on PR #19837:
URL: https://github.com/apache/flink/pull/19837#issuecomment-1139747723

   
   ## CI report:
   
   * 0444a912f1d97bc13f7ea5747aae4c52c93e7fa4 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] JasonLeeCoding opened a new pull request, #19837: [FLINK-24490][docs] The sample code is wrong in Apache Kafka Connecto…

2022-05-27 Thread GitBox


JasonLeeCoding opened a new pull request, #19837:
URL: https://github.com/apache/flink/pull/19837

   
   ## What is the purpose of the change
   
   *Modify the code error in the document*
   
   
   ## Brief change log
   
   *Modify the code error in the document*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not documented)
   





[jira] [Commented] (FLINK-27804) Do not observe cluster/job mid upgrade

2022-05-27 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542984#comment-17542984
 ] 

Gyula Fora commented on FLINK-27804:


The problem is that I cannot reliably say what the root cause was and whether 
this will solve it 100%.

We will keep testing this today, and if we don't hit this again during another 
day of functional testing, I think it would be better to create an RC3 including 
this fix.

> Do not observe cluster/job mid upgrade
> --
>
> Key: FLINK-27804
> URL: https://issues.apache.org/jira/browse/FLINK-27804
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.0.0
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>Priority: Critical
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.0.0
>
>
> Seems like in some weird corner cases, when we observe the FINISHED job 
> (stopped with savepoint) during an upgrade, the recorded last snapshot is 
> incorrect (we still need to investigate whether this is due to a Flink problem 
> or something else). This can lead to upgrade errors.
> This can be avoided by simply skipping the observe step when the 
> reconciliation status is UPGRADING, because at that point we already know 
> that the job was shut down and the state was recorded correctly in the 
> savepoint info.





[jira] [Comment Edited] (FLINK-27734) Not showing checkpoint interval properly in WebUI when checkpoint is disabled

2022-05-27 Thread Feifan Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542979#comment-17542979
 ] 

Feifan Wang edited comment on FLINK-27734 at 5/27/22 3:34 PM:
--

Thanks for the reply [~Zsigner]. I just want to remove the single quotes around 
0x7fff first, but I find that lint-staged will change 
"0x7fff" to "0; x7fff". Is there any advice 
for dealing with this better?


was (Author: feifan wang):
Thanks for the reply [~Zsigner]. I just want to remove the single quotes around 
0x7 first, but I find that lint-staged will change 
"0x7" to "0; x7". Is there any advice 
for dealing with this better?

> Not showing checkpoint interval properly  in WebUI when checkpoint is disabled
> --
>
> Key: FLINK-27734
> URL: https://issues.apache.org/jira/browse/FLINK-27734
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.15.0
>Reporter: Feifan Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2022-05-22-23-42-46-365.png
>
>
> Not showing checkpoint interval properly  in WebUI when checkpoint is disabled
> !image-2022-05-22-23-42-46-365.png|width=1019,height=362!





[jira] [Commented] (FLINK-27734) Not showing checkpoint interval properly in WebUI when checkpoint is disabled

2022-05-27 Thread Feifan Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542983#comment-17542983
 ] 

Feifan Wang commented on FLINK-27734:
-

[~Zsigner], I think the problem is that the strict equality operator ( === ) 
requires the same type on both sides to return true.

> Not showing checkpoint interval properly  in WebUI when checkpoint is disabled
> --
>
> Key: FLINK-27734
> URL: https://issues.apache.org/jira/browse/FLINK-27734
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.15.0
>Reporter: Feifan Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2022-05-22-23-42-46-365.png
>
>
> Not showing checkpoint interval properly  in WebUI when checkpoint is disabled
> !image-2022-05-22-23-42-46-365.png|width=1019,height=362!





[jira] [Commented] (FLINK-27734) Not showing checkpoint interval properly in WebUI when checkpoint is disabled

2022-05-27 Thread Feifan Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542979#comment-17542979
 ] 

Feifan Wang commented on FLINK-27734:
-

Thanks for the reply [~Zsigner]. I just want to remove the single quotes around 
0x7 first, but I find that lint-staged will change 
"0x7" to "0; x7". Is there any advice 
for dealing with this better?

> Not showing checkpoint interval properly  in WebUI when checkpoint is disabled
> --
>
> Key: FLINK-27734
> URL: https://issues.apache.org/jira/browse/FLINK-27734
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.15.0
>Reporter: Feifan Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2022-05-22-23-42-46-365.png
>
>
> Not showing checkpoint interval properly  in WebUI when checkpoint is disabled
> !image-2022-05-22-23-42-46-365.png|width=1019,height=362!





[GitHub] [flink-table-store] ajian2002 commented on pull request #121: [FLINK-27626] Introduce AggregatuibMergeFunction

2022-05-27 Thread GitBox


ajian2002 commented on PR #121:
URL: 
https://github.com/apache/flink-table-store/pull/121#issuecomment-1139717042

   > @tsreaper For each column's functionality, the different aggregation types 
all extend the column function interface, and the different data types implement 
the column interface functionality of the corresponding types. This 
implementation relationship ensures that we have all of the column aggregation 
interfaces.
   
   Now we can specify different aggregate functions for different columns, but 
I haven't implemented any kind other than sum.





[GitHub] [flink-table-store] ajian2002 commented on pull request #121: [FLINK-27626] Introduce AggregatuibMergeFunction

2022-05-27 Thread GitBox


ajian2002 commented on PR #121:
URL: 
https://github.com/apache/flink-table-store/pull/121#issuecomment-1139716861

   > @tsreaper I implemented the column aggregation function interface for each 
column; all the different aggregation kinds extend the column aggregation 
function interface, and the different data types implement the column 
aggregation function interface of the corresponding kind. These three layers of 
relationships ensure that we have enough flexibility.
   
   Now we can specify different aggregate functions for different columns, but 
I haven't implemented any kind other than sum.
   





[GitHub] [flink-ml] lindong28 commented on a diff in pull request #83: [FLINK-27170] Add Transformer and Estimator for OnlineLogisticRegression

2022-05-27 Thread GitBox


lindong28 commented on code in PR #83:
URL: https://github.com/apache/flink-ml/pull/83#discussion_r883628874


##
flink-ml-core/src/main/java/org/apache/flink/ml/common/datastream/DataStreamUtils.java:
##
@@ -182,4 +193,63 @@ public void snapshotState(StateSnapshotContext context) 
throws Exception {
 }
 }
 }
+
+/**
+ * A function that generate several data batches and distribute them to 
downstream operator.
+ *
+ * @param  Data type of batch data.
+ */
+public static  DataStream generateBatchData(

Review Comment:
   The current function name and the Java doc do not seem to capture the key 
functionality of this method, e.g. split the input data into global batches of 
`batchSize`, where each global batch is further split into 
`downStreamParallelism` local batches for downstream operators.
   
   And previous code that uses `GlobalBatchSplitter` seems a bit more readable 
than the current version, which puts everything into one method with deeper 
indentation.
   
   Could you preserve the classes `GlobalBatchSplitter` and 
`GlobalBatchCreator`, and update the method name and its Java doc to make it a 
bit more self-explanatory?
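
   For illustration only, a plain-Java sketch of the two-level batching 
described in this comment (global batches of `batchSize`, each further split 
into `downStreamParallelism` local batches); the list-based types and names are 
assumptions, not the actual DataStream-based implementation:
{code:java}
import java.util.ArrayList;
import java.util.List;

public class TwoLevelBatchingSketch {

    /**
     * Splits the input into global batches of batchSize, and each global batch into
     * at most downStreamParallelism local batches (trailing batches may be smaller).
     */
    static <T> List<List<List<T>>> split(List<T> input, int batchSize, int downStreamParallelism) {
        List<List<List<T>>> globalBatches = new ArrayList<>();
        for (int i = 0; i < input.size(); i += batchSize) {
            List<T> globalBatch = input.subList(i, Math.min(i + batchSize, input.size()));
            // Size of each local batch handed to one downstream parallel instance.
            int localSize = (globalBatch.size() + downStreamParallelism - 1) / downStreamParallelism;
            List<List<T>> localBatches = new ArrayList<>();
            for (int j = 0; j < globalBatch.size(); j += localSize) {
                localBatches.add(globalBatch.subList(j, Math.min(j + localSize, globalBatch.size())));
            }
            globalBatches.add(localBatches);
        }
        return globalBatches;
    }
}
{code}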



##
flink-ml-core/src/test/java/org/apache/flink/ml/linalg/BLASTest.java:
##
@@ -70,7 +70,18 @@ public void testAxpyK() {
 @Test
 public void testDot() {
 DenseVector anotherDenseVec = Vectors.dense(1, 2, 3, 4, 5);
+SparseVector sparseVector1 =
+Vectors.sparse(5, new int[] {1, 2, 4}, new double[] {1., 1., 
4.});
+SparseVector sparseVector2 =
+Vectors.sparse(5, new int[] {1, 3, 4}, new double[] {1., 2., 
1.});
+// Tests Dot(dense, dense).

Review Comment:
   nits: Since the method name is `dot(...)`, would it be more intuitive to use 
`dot(dense, dense)` here?
   
   Same for the lines below.



##
flink-ml-lib/src/main/java/org/apache/flink/ml/classification/logisticregression/LogisticRegressionModel.java:
##
@@ -161,7 +162,8 @@ public Row map(Row dataPoint) {
  * @param coefficient The model parameters.
  * @return The prediction label and the raw probabilities.
  */
-private static Row predictOneDataPoint(DenseVector feature, DenseVector 
coefficient) {
+protected static Row predictOneDataPoint(Vector feature, DenseVector 
coefficient) {
+

Review Comment:
   Is this empty line necessary?



##
flink-ml-lib/src/main/java/org/apache/flink/ml/classification/logisticregression/OnlineLogisticRegressionParams.java:
##
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.classification.logisticregression;
+
+import org.apache.flink.ml.common.param.HasBatchStrategy;
+import org.apache.flink.ml.common.param.HasElasticNet;
+import org.apache.flink.ml.common.param.HasGlobalBatchSize;
+import org.apache.flink.ml.common.param.HasLabelCol;
+import org.apache.flink.ml.common.param.HasReg;
+import org.apache.flink.ml.common.param.HasWeightCol;
+import org.apache.flink.ml.param.DoubleParam;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.param.ParamValidators;
+
+/**
+ * Params of {@link OnlineLogisticRegression}.
+ *
+ * @param <T> The class type of this instance.
+ */
+public interface OnlineLogisticRegressionParams<T>
+        extends HasLabelCol<T>,
+                HasWeightCol<T>,
+                HasBatchStrategy<T>,
+                HasGlobalBatchSize<T>,
+                HasReg<T>,
+                HasElasticNet<T>,
+                OnlineLogisticRegressionModelParams<T> {
+
+    Param<Double> ALPHA =
+            new DoubleParam("alpha", "The parameter alpha of ftrl.", 0.1, ParamValidators.gt(0.0));
+
+    default Double getAlpha() {
+        return get(ALPHA);
+    }
+
+    default T setAlpha(Double value) {
+        return set(ALPHA, value);
+    }
+
+    Param<Double> BETA =
+            new DoubleParam("alpha", "The parameter beta of ftrl.", 0.1, ParamValidators.gt(0.0));

Review Comment:
   Hmm.. how would users know what is FTRL when they read this doc?
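
   For reference, one possible rewording of the declaration above (illustrative 
only, not the merged code): FTRL stands for Follow-The-Regularized-Leader, and 
the name string in the BETA declaration appears to duplicate "alpha":
{code:java}
import org.apache.flink.ml.param.DoubleParam;
import org.apache.flink.ml.param.Param;
import org.apache.flink.ml.param.ParamValidators;

// Inside the OnlineLogisticRegressionParams interface:
Param<Double> BETA =
        new DoubleParam(
                "beta", // the diff above uses the name "alpha" here, which looks like a copy-paste slip
                "The beta parameter of FTRL (Follow-The-Regularized-Leader).",
                0.1,
                ParamValidators.gt(0.0));
{code}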



##
flink-ml-lib/src/main/java/org/apache/flink/ml/classification/logisticregression/LogisticRegressionModelData.java:

[jira] [Commented] (FLINK-25538) [JUnit5 Migration] Module: flink-connector-kafka

2022-05-27 Thread Anatoly Popov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542972#comment-17542972
 ] 

Anatoly Popov commented on FLINK-25538:
---

[~renqs] I would be interested to work on this one. Could you please assign it 
to me?

> [JUnit5 Migration] Module: flink-connector-kafka
> 
>
> Key: FLINK-25538
> URL: https://issues.apache.org/jira/browse/FLINK-25538
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kafka
>Reporter: Qingsheng Ren
>Priority: Minor
>






[jira] [Commented] (FLINK-27804) Do not observe cluster/job mid upgrade

2022-05-27 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542971#comment-17542971
 ] 

Yang Wang commented on FLINK-27804:
---

If this issue occurs every few hundred upgrades, then I am afraid we need to 
cancel the current release candidate #2. WDYT?

> Do not observe cluster/job mid upgrade
> --
>
> Key: FLINK-27804
> URL: https://issues.apache.org/jira/browse/FLINK-27804
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.0.0
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>Priority: Critical
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.0.0
>
>
> Seems like in some weird corner cases, when we observe the FINISHED job 
> (stopped with savepoint) during an upgrade, the recorded last snapshot is 
> incorrect (we still need to investigate whether this is due to a Flink problem 
> or something else). This can lead to upgrade errors.
> This can be avoided by simply skipping the observe step when the 
> reconciliation status is UPGRADING, because at that point we already know 
> that the job was shut down and the state was recorded correctly in the 
> savepoint info.





[GitHub] [flink-table-store] ajian2002 commented on pull request #121: [FLINK-27626] Introduce AggregatuibMergeFunction

2022-05-27 Thread GitBox


ajian2002 commented on PR #121:
URL: 
https://github.com/apache/flink-table-store/pull/121#issuecomment-1139714404

   @tsreaper I implemented the column aggregation function interface for each 
column; all the different aggregation kinds extend the column aggregation 
function interface, and the different data types implement the column 
aggregation function interface of the corresponding kind. These three layers of 
relationships ensure that we have enough flexibility.
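
   As a rough sketch of the three-layer structure described above (the interface 
and class names here are made up for illustration and are not the actual 
flink-table-store API):
{code:java}
/** Layer 1: the per-column aggregate function contract. */
interface ColumnAggregateFunction<T> {
    T agg(T accumulator, T input);
}

/** Layer 2: one sub-interface per aggregation kind (sum, max, ...). */
interface SumAggregateFunction<T> extends ColumnAggregateFunction<T> {}

/** Layer 3: one implementation per data type of a given kind. */
class LongSumAggregateFunction implements SumAggregateFunction<Long> {
    @Override
    public Long agg(Long accumulator, Long input) {
        if (accumulator == null) {
            return input;
        }
        return input == null ? accumulator : accumulator + input;
    }
}
{code}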





[jira] [Commented] (FLINK-27734) Not showing checkpoint interval properly in WebUI when checkpoint is disabled

2022-05-27 Thread EMing Zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542970#comment-17542970
 ] 

EMing Zhou commented on FLINK-27734:


Hi [~Feifan Wang],
so I think this value should not be changed; it may be caused by other reasons.

> Not showing checkpoint interval properly  in WebUI when checkpoint is disabled
> --
>
> Key: FLINK-27734
> URL: https://issues.apache.org/jira/browse/FLINK-27734
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.15.0
>Reporter: Feifan Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2022-05-22-23-42-46-365.png
>
>
> Not showing checkpoint interval properly  in WebUI when checkpoint is disabled
> !image-2022-05-22-23-42-46-365.png|width=1019,height=362!





[jira] [Commented] (FLINK-27805) Bump ORC version to 1.7.2

2022-05-27 Thread Dongjoon Hyun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542965#comment-17542965
 ] 

Dongjoon Hyun commented on FLINK-27805:
---

Please take over this, [~sonice_lj]. :)

> Bump ORC version to 1.7.2
> -
>
> Key: FLINK-27805
> URL: https://issues.apache.org/jira/browse/FLINK-27805
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: jia liu
>Priority: Minor
>
> The current ORC dependency version of Flink is 1.5.6, but the latest ORC 
> version, 1.7.x, has been available for a long time.
> In order to use its new features (zstd compression, column encryption, 
> etc.), we should upgrade the ORC version.





[jira] [Commented] (FLINK-27821) Cannot delete flinkdeployment when the pod and deployment deleted manually

2022-05-27 Thread Aitozi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542961#comment-17542961
 ] 

Aitozi commented on FLINK-27821:


[~Miuler] The deletion of the session cluster's flinkdeployment object will be 
postponed until after the cleanup of the flinksessionjob.

But from your log, it seems the session cluster was already shut down when 
performing the cleanup. Did you manually delete the Flink session cluster's 
Deployment?

> Cannot delete flinkdeployment when the pod and deployment deleted manually
> --
>
> Key: FLINK-27821
> URL: https://issues.apache.org/jira/browse/FLINK-27821
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.1.0
>Reporter: Hector Miuler Malpica Gallegos
>Priority: Major
>
> My operator was installed with the following command:
>  
> {{git clone g...@github.com:apache/flink-kubernetes-operator.git}}
> {{git checkout 207b17b}}
> {{cd flink-kubernetes-operator}}
> {{helm -debug upgrade -i  flink-kubernetes-operator 
> helm/flink-kubernetes-operator --set 
> image.repository=ghcr.io/apache/flink-kubernetes-operator -set 
> image.tag=207b17b}}
>  
> Then I created a flinkDeployment and a flinkSessionJob, then I deleted the 
> Deployment that was generated for the flinkDeployment, then reinstalled the 
> operator, and finally I wanted to delete the flinkdeployment.
>  
> kubectl logs -f pod/flink-kubernetes-operator-5cf66cbbcb-bpl9p
>  
> {{2022-05-27 13:40:22,027 o.a.f.k.o.c.FlinkDeploymentController [INFO 
> ][flink-system/migration] Deleting FlinkDeployment}}
> {{2022-05-27 13:40:34,047 o.a.f.s.n.i.n.c.AbstractChannel [WARN ] 
> Force-closing a channel whose registration task was not accepted by an event 
> loop: [id: 0xb2062900]}}
> {{java.util.concurrent.RejectedExecutionException: event executor terminated}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:483)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:87)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:81)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:86)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:323)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.doResolveAndConnect(Bootstrap.java:155)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:139)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:123)}}
> {{        at 
> org.apache.flink.runtime.rest.RestClient.submitRequest(RestClient.java:467)}}
> {{        at 
> org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:390)}}
> {{        at 
> org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:304)}}
> {{        at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$null$32(RestClusterClient.java:863)}}
> {{        at 
> java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(Unknown 
> Source)}}
> {{        at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown 
> Source)}}
> {{        at 
> java.base/java.util.concurrent.CompletableFuture.postFire(Unknown Source)}}
> {{        at 
> java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown 
> Source)}}
> {{        at 
> java.base/java.util.concurrent.CompletableFuture$Completion.run(Unknown 
> Source)}}
> {{        at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)}}
> {{        at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)}}
> {{        at java.base/java.lang.Thread.run(Unknown Source)}}
> {{2022-05-27 13:40:34,047 

[jira] [Commented] (FLINK-27821) Cannot delete flinkdeployment when the pod and deployment deleted manually

2022-05-27 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542931#comment-17542931
 ] 

Gyula Fora commented on FLINK-27821:


cc [~aitozi] , maybe you have seen something similar

> Cannot delete flinkdeployment when the pod and deployment deleted manually
> --
>
> Key: FLINK-27821
> URL: https://issues.apache.org/jira/browse/FLINK-27821
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.1.0
>Reporter: Hector Miuler Malpica Gallegos
>Priority: Major
>
> My operator was installed with the following command:
>  
> {{git clone g...@github.com:apache/flink-kubernetes-operator.git}}
> {{git checkout 207b17b}}
> {{cd flink-kubernetes-operator}}
> {{helm -debug upgrade -i  flink-kubernetes-operator 
> helm/flink-kubernetes-operator --set 
> image.repository=ghcr.io/apache/flink-kubernetes-operator -set 
> image.tag=207b17b}}
>  
> Then I created a flinkDeployment and a flinkSessionJob, then I deleted the 
> Deployment that was generated for the flinkDeployment, then reinstalled the 
> operator, and finally I wanted to delete the flinkdeployment.
>  
> kubectl logs -f pod/flink-kubernetes-operator-5cf66cbbcb-bpl9p
>  
> {{2022-05-27 13:40:22,027 o.a.f.k.o.c.FlinkDeploymentController [INFO 
> ][flink-system/migration] Deleting FlinkDeployment}}
> {{2022-05-27 13:40:34,047 o.a.f.s.n.i.n.c.AbstractChannel [WARN ] 
> Force-closing a channel whose registration task was not accepted by an event 
> loop: [id: 0xb2062900]}}
> {{java.util.concurrent.RejectedExecutionException: event executor terminated}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:483)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:87)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:81)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:86)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:323)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.doResolveAndConnect(Bootstrap.java:155)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:139)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:123)}}
> {{        at 
> org.apache.flink.runtime.rest.RestClient.submitRequest(RestClient.java:467)}}
> {{        at 
> org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:390)}}
> {{        at 
> org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:304)}}
> {{        at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$null$32(RestClusterClient.java:863)}}
> {{        at 
> java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(Unknown 
> Source)}}
> {{        at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown 
> Source)}}
> {{        at 
> java.base/java.util.concurrent.CompletableFuture.postFire(Unknown Source)}}
> {{        at 
> java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown 
> Source)}}
> {{        at 
> java.base/java.util.concurrent.CompletableFuture$Completion.run(Unknown 
> Source)}}
> {{        at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)}}
> {{        at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)}}
> {{        at java.base/java.lang.Thread.run(Unknown Source)}}
> {{2022-05-27 13:40:34,047 o.a.f.s.n.i.n.u.c.D.rejectedExecution [ERROR] 
> Failed to submit a listener notification task. Event loop shut down?}}
> {{java.util.concurrent.RejectedExecutionException: event executor terminated}}
> {{        at 
> 

[jira] [Commented] (FLINK-27821) Cannot delete flinkdeployment when the pod and deployment deleted manually

2022-05-27 Thread Hector Miuler Malpica Gallegos (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542930#comment-17542930
 ] 

Hector Miuler Malpica Gallegos commented on FLINK-27821:


I deleted the flinksessionjob and then forced the deletion of the 
flinkdeployment, and at some point it could be deleted. How strange.
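
For reference, a minimal sketch of how such a forced deletion can be done by 
clearing the finalizers on the stuck resource, reusing the namespace/name that 
appear in the log above (flink-system/migration); note that this skips whatever 
cleanup the operator would normally perform:

```
# Clear the finalizers so Kubernetes can garbage-collect the CR even though
# the operator can no longer complete its cleanup (the deployment is already gone).
kubectl patch flinkdeployment migration -n flink-system \
  --type merge -p '{"metadata":{"finalizers":null}}'

# The pending delete then completes:
kubectl delete flinkdeployment migration -n flink-system
```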

> Cannot delete flinkdeployment when the pod and deployment deleted manually
> --
>
> Key: FLINK-27821
> URL: https://issues.apache.org/jira/browse/FLINK-27821
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.1.0
>Reporter: Hector Miuler Malpica Gallegos
>Priority: Major
>
> My operator was installed with following command:
>  
> {{git clone g...@github.com:apache/flink-kubernetes-operator.git}}
> {{git checkout 207b17b}}
> {{cd flink-kubernetes-operator}}
> {{helm -debug upgrade -i  flink-kubernetes-operator 
> helm/flink-kubernetes-operator --set 
> image.repository=ghcr.io/apache/flink-kubernetes-operator -set 
> image.tag=207b17b}}
>  
> Then I create a flinkDeployment and flinkSessionJob, then I deleted the 
> deployment that generated the flinkDeployment, then reinstall operator and 
> finally I wanted to delete the flinkdeployment
>  
> kubectl logs -f pod/flink-kubernetes-operator-5cf66cbbcb-bpl9p
>  
> {{2022-05-27 13:40:22,027 o.a.f.k.o.c.FlinkDeploymentController [INFO 
> ][flink-system/migration] Deleting FlinkDeployment}}
> {{2022-05-27 13:40:34,047 o.a.f.s.n.i.n.c.AbstractChannel [WARN ] 
> Force-closing a channel whose registration task was not accepted by an event 
> loop: [id: 0xb2062900]}}
> {{java.util.concurrent.RejectedExecutionException: event executor terminated}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:483)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:87)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:81)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:86)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:323)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.doResolveAndConnect(Bootstrap.java:155)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:139)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:123)}}
> {{        at 
> org.apache.flink.runtime.rest.RestClient.submitRequest(RestClient.java:467)}}
> {{        at 
> org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:390)}}
> {{        at 
> org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:304)}}
> {{        at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$null$32(RestClusterClient.java:863)}}
> {{        at 
> java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(Unknown 
> Source)}}
> {{        at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown 
> Source)}}
> {{        at 
> java.base/java.util.concurrent.CompletableFuture.postFire(Unknown Source)}}
> {{        at 
> java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown 
> Source)}}
> {{        at 
> java.base/java.util.concurrent.CompletableFuture$Completion.run(Unknown 
> Source)}}
> {{        at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)}}
> {{        at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)}}
> {{        at java.base/java.lang.Thread.run(Unknown Source)}}
> {{2022-05-27 13:40:34,047 o.a.f.s.n.i.n.u.c.D.rejectedExecution [ERROR] 
> Failed to submit a listener notification task. Event loop shut down?}}
> 

[jira] [Commented] (FLINK-27813) java.lang.IllegalStateException: after migration from statefun-3.1.1 to 3.2.0

2022-05-27 Thread Galen Warren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542929#comment-17542929
 ] 

Galen Warren commented on FLINK-27813:
--

Hmm, not sure. The relevant part of the error message seems to be this:
Time interval unit label 'm' does not match any of the recognized units: DAYS: 
(d | day | days), HOURS: (h | hour | hours), MINUTES: (min | minute | minutes), 
SECONDS: (s | sec | secs | second | seconds), MILLISECONDS: (ms | milli | 
millis | millisecond | milliseconds), MICROSECONDS: (µs | micro | micros | 
microsecond | microseconds), NANOSECONDS: (ns | nano | nanos | nanosecond | 
nanoseconds) 
Do you have a time unit in your spec labeled with an 'm'?
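
A small sketch of how to spot and fix such a value; the module file path below 
is an assumption for illustration, not something given in this issue:

```
# Hypothetical path: point this at your own StateFun module definition.
# Find duration values written as a bare "m", which the parser rejects:
grep -nE '[0-9]+ ?m[^a-z]*$' module.yaml

# Rewrite e.g. "1m" to a recognized unit label such as "1min" (or "1 minute"):
sed -i.bak 's/\([0-9]\)m$/\1min/' module.yaml
```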

> java.lang.IllegalStateException: after migration from statefun-3.1.1 to 3.2.0
> -
>
> Key: FLINK-27813
> URL: https://issues.apache.org/jira/browse/FLINK-27813
> Project: Flink
>  Issue Type: Bug
>  Components: API / State Processor
>Affects Versions: statefun-3.2.0
>Reporter: Oleksandr
>Priority: Blocker
> Attachments: screenshot-1.png
>
>
> Issue was met after migration from 
> flink-statefun:3.1.1-java11
> to
> flink-statefun:3.2.0-java8
>  
> {code:java}
> ts.execution-paymentManualValidationRequestEgress-egress, Sink: 
> ua.aval.payments.execution-norkomExecutionRequested-egress, Sink: 
> ua.aval.payments.execution-shareStatusToManualValidation-egress) (1/1)#15863 
> (98a1f6bef9a435ef9d0292fefeb64022) switched from RUNNING to FAILED with 
> failure cause: java.lang.IllegalStateException: Unable to parse Netty 
> transport spec.\n\tat 
> org.apache.flink.statefun.flink.core.nettyclient.NettyRequestReplyClientFactory.parseTransportSpec(NettyRequestReplyClientFactory.java:56)\n\tat
>  
> org.apache.flink.statefun.flink.core.nettyclient.NettyRequestReplyClientFactory.createTransportClient(NettyRequestReplyClientFactory.java:39)\n\tat
>  
> org.apache.flink.statefun.flink.core.httpfn.HttpFunctionProvider.functionOfType(HttpFunctionProvider.java:49)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.PredefinedFunctionLoader.load(PredefinedFunctionLoader.java:70)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.PredefinedFunctionLoader.load(PredefinedFunctionLoader.java:47)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.StatefulFunctionRepository.load(StatefulFunctionRepository.java:71)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.StatefulFunctionRepository.get(StatefulFunctionRepository.java:59)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.LocalFunctionGroup.enqueue(LocalFunctionGroup.java:48)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.Reductions.enqueue(Reductions.java:153)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.Reductions.apply(Reductions.java:148)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.FunctionGroupOperator.processElement(FunctionGroupOperator.java:90)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:99)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:80)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)\n\tat
>  
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)\n\tat
>  
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)\n\tat
>  
> org.apache.flink.statefun.flink.core.feedback.FeedbackUnionOperator.sendDownstream(FeedbackUnionOperator.java:180)\n\tat
>  
> org.apache.flink.statefun.flink.core.feedback.FeedbackUnionOperator.processElement(FeedbackUnionOperator.java:86)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:233)\n\tat
>  
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:134)\n\tat
>  
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:105)\n\tat
>  
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:496)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:203)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:809)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:761)\n\tat
>  
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)\n\tat
>  
> 

[jira] [Updated] (FLINK-27821) Cannot delete flinkdeployment when the pod and deployment deleted manually

2022-05-27 Thread Hector Miuler Malpica Gallegos (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Miuler Malpica Gallegos updated FLINK-27821:
---
Description: 
My operator was installed with following command:

 

{{git clone g...@github.com:apache/flink-kubernetes-operator.git}}
{{git checkout 207b17b}}
{{cd flink-kubernetes-operator}}
{{helm -debug upgrade -i  flink-kubernetes-operator 
helm/flink-kubernetes-operator --set 
image.repository=ghcr.io/apache/flink-kubernetes-operator -set 
image.tag=207b17b}}

 

Then I create a flinkDeployment and flinkSessionJob, then I deleted the 
deployment that generated the flinkDeployment, then reinstall operator and 
finally I wanted to delete the flinkdeployment

 

kubectl logs -f pod/flink-kubernetes-operator-5cf66cbbcb-bpl9p

 

{{2022-05-27 13:40:22,027 o.a.f.k.o.c.FlinkDeploymentController [INFO 
][flink-system/migration] Deleting FlinkDeployment}}
{{2022-05-27 13:40:34,047 o.a.f.s.n.i.n.c.AbstractChannel [WARN ] Force-closing 
a channel whose registration task was not accepted by an event loop: [id: 
0xb2062900]}}
{{java.util.concurrent.RejectedExecutionException: event executor terminated}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:483)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:87)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:81)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:86)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:323)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.doResolveAndConnect(Bootstrap.java:155)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:139)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:123)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.submitRequest(RestClient.java:467)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:390)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:304)}}
{{        at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$null$32(RestClusterClient.java:863)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)}}
{{        at java.base/java.util.concurrent.CompletableFuture.postFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$Completion.run(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)}}
{{        at java.base/java.lang.Thread.run(Unknown Source)}}
{{2022-05-27 13:40:34,047 o.a.f.s.n.i.n.u.c.D.rejectedExecution [ERROR] Failed 
to submit a listener notification task. Event loop shut down?}}
{{java.util.concurrent.RejectedExecutionException: event executor terminated}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815)}}
{{        at 

[jira] [Updated] (FLINK-27821) Cannot delete flinkdeployment when the pod and deployment deleted manually

2022-05-27 Thread Hector Miuler Malpica Gallegos (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Miuler Malpica Gallegos updated FLINK-27821:
---
Description: 
My operator was installed with following command:

 

{{git clone g...@github.com:apache/flink-kubernetes-operator.git}}

{{git checkout 207b17b}}

{{cd flink-kubernetes-operator}}

{{helm -debug upgrade -i  flink-kubernetes-operator 
helm/flink-kubernetes-operator --set 
image.repository=ghcr.io/apache/flink-kubernetes-operator -set 
image.tag=207b17b}}

 

Then I create a flinkDeployment and flinkSessionJob, then I deleted the 
deployment that generated the flinkDeployment, then reinstall operator and 
finally I wanted to delete the flinkdeployment

 

kubectl logs -f pod/flink-kubernetes-operator-5cf66cbbcb-bpl9p

 

{{2022-05-27 13:40:22,027 o.a.f.k.o.c.FlinkDeploymentController [INFO 
][flink-system/migration] Deleting FlinkDeployment}}
{{2022-05-27 13:40:34,047 o.a.f.s.n.i.n.c.AbstractChannel [WARN ] Force-closing 
a channel whose registration task was not accepted by an event loop: [id: 
0xb2062900]}}
{{java.util.concurrent.RejectedExecutionException: event executor terminated}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:483)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:87)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:81)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:86)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:323)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.doResolveAndConnect(Bootstrap.java:155)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:139)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:123)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.submitRequest(RestClient.java:467)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:390)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:304)}}
{{        at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$null$32(RestClusterClient.java:863)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)}}
{{        at java.base/java.util.concurrent.CompletableFuture.postFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$Completion.run(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)}}
{{        at java.base/java.lang.Thread.run(Unknown Source)}}
{{2022-05-27 13:40:34,047 o.a.f.s.n.i.n.u.c.D.rejectedExecution [ERROR] Failed 
to submit a listener notification task. Event loop shut down?}}
{{java.util.concurrent.RejectedExecutionException: event executor terminated}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815)}}
{{        at 

[jira] [Updated] (FLINK-27821) Cannot delete flinkdeployment when the pod and deployment deleted manually

2022-05-27 Thread Hector Miuler Malpica Gallegos (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Miuler Malpica Gallegos updated FLINK-27821:
---
Description: 
My operator was installed with following command:

 

{{git clone g...@github.com:apache/flink-kubernetes-operator.git }}

{{git checkout 207b17b}}

{{cd flink-kubernetes-operator}}

helm --debug upgrade -i  flink-kubernetes-operator 
helm/flink-kubernetes-operator --set 
image.repository=ghcr.io/apache/flink-kubernetes-operator {{--set 
image.tag=207b17b}}

 

Then I create a flinkDeployment and flinkSessionJob, then I deleted the 
deployment that generated the flinkDeployment, then reinstall operator and 
finally I wanted to delete the flinkdeployment

 

kubectl logs -f pod/flink-kubernetes-operator-5cf66cbbcb-bpl9p

 

{{2022-05-27 13:40:22,027 o.a.f.k.o.c.FlinkDeploymentController [INFO 
][flink-system/migration] Deleting FlinkDeployment}}
{{2022-05-27 13:40:34,047 o.a.f.s.n.i.n.c.AbstractChannel [WARN ] Force-closing 
a channel whose registration task was not accepted by an event loop: [id: 
0xb2062900]}}
{{java.util.concurrent.RejectedExecutionException: event executor terminated}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:483)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:87)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:81)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:86)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:323)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.doResolveAndConnect(Bootstrap.java:155)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:139)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:123)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.submitRequest(RestClient.java:467)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:390)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:304)}}
{{        at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$null$32(RestClusterClient.java:863)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)}}
{{        at java.base/java.util.concurrent.CompletableFuture.postFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$Completion.run(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)}}
{{        at java.base/java.lang.Thread.run(Unknown Source)}}
{{2022-05-27 13:40:34,047 o.a.f.s.n.i.n.u.c.D.rejectedExecution [ERROR] Failed 
to submit a listener notification task. Event loop shut down?}}
{{java.util.concurrent.RejectedExecutionException: event executor terminated}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815)}}
{{        at 

[jira] [Updated] (FLINK-27821) Cannot delete flinkdeployment when the pod and deployment deleted manually

2022-05-27 Thread Hector Miuler Malpica Gallegos (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Miuler Malpica Gallegos updated FLINK-27821:
---
Description: 
My operator was installed with following command:

 

{{{}git clone g...@github.com:apache/flink-kubernetes-operator.git git checkout 
207b17b{}}}{{{}cd flink-kubernetes-operator  {}}}{{helm --debug upgrade -i \}}
{{           flink-kubernetes-operator helm/flink-kubernetes-operator \}}
{{           --set image.repository=ghcr.io/apache/flink-kubernetes-operator \}}
{{           --set image.tag=207b17b}}

 

Then I create a flinkDeployment and flinkSessionJob, then I deleted the 
deployment that generated the flinkDeployment, then reinstall operator and 
finally I wanted to delete the flinkdeployment

 

kubectl logs -f pod/flink-kubernetes-operator-5cf66cbbcb-bpl9p

 

{{2022-05-27 13:40:22,027 o.a.f.k.o.c.FlinkDeploymentController [INFO 
][flink-system/migration] Deleting FlinkDeployment}}
{{2022-05-27 13:40:34,047 o.a.f.s.n.i.n.c.AbstractChannel [WARN ] Force-closing 
a channel whose registration task was not accepted by an event loop: [id: 
0xb2062900]}}
{{java.util.concurrent.RejectedExecutionException: event executor terminated}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:483)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:87)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:81)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:86)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:323)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.doResolveAndConnect(Bootstrap.java:155)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:139)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:123)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.submitRequest(RestClient.java:467)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:390)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:304)}}
{{        at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$null$32(RestClusterClient.java:863)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)}}
{{        at java.base/java.util.concurrent.CompletableFuture.postFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$Completion.run(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)}}
{{        at java.base/java.lang.Thread.run(Unknown Source)}}
{{2022-05-27 13:40:34,047 o.a.f.s.n.i.n.u.c.D.rejectedExecution [ERROR] Failed 
to submit a listener notification task. Event loop shut down?}}
{{java.util.concurrent.RejectedExecutionException: event executor terminated}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)}}
{{        at 

[jira] [Updated] (FLINK-27821) Cannot delete flinkdeployment when the pod and deployment deleted manually

2022-05-27 Thread Hector Miuler Malpica Gallegos (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Miuler Malpica Gallegos updated FLINK-27821:
---
Description: 
My operator was installed with following command:

```

git clone g...@github.com:apache/flink-kubernetes-operator.git git checkout 
207b17b

cd flink-kubernetes-operator  

helm --debug upgrade -i \
           flink-kubernetes-operator helm/flink-kubernetes-operator \
           --set image.repository=ghcr.io/apache/flink-kubernetes-operator \
           --set image.tag=207b17b

```

Then I create a flinkDeployment and flinkSessionJob, then I deleted the 
deployment that generated the flinkDeployment, then reinstall operator and 
finally I wanted to delete the flinkdeployment

 

kubectl logs -f pod/flink-kubernetes-operator-5cf66cbbcb-bpl9p

```

2022-05-27 13:40:22,027 o.a.f.k.o.c.FlinkDeploymentController [INFO 
][flink-system/migration] Deleting FlinkDeployment
2022-05-27 13:40:34,047 o.a.f.s.n.i.n.c.AbstractChannel [WARN ] Force-closing a 
channel whose registration task was not accepted by an event loop: [id: 
0xb2062900]
java.util.concurrent.RejectedExecutionException: event executor terminated
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815)
        at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:483)
        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:87)
        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:81)
        at 
org.apache.flink.shaded.netty4.io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:86)
        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:323)
        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.doResolveAndConnect(Bootstrap.java:155)
        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:139)
        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:123)
        at 
org.apache.flink.runtime.rest.RestClient.submitRequest(RestClient.java:467)
        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:390)
        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:304)
        at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$null$32(RestClusterClient.java:863)
        at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(Unknown 
Source)
        at 
java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)
        at java.base/java.util.concurrent.CompletableFuture.postFire(Unknown 
Source)
        at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown 
Source)
        at 
java.base/java.util.concurrent.CompletableFuture$Completion.run(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown 
Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown 
Source)
        at java.base/java.lang.Thread.run(Unknown Source)
2022-05-27 13:40:34,047 o.a.f.s.n.i.n.u.c.D.rejectedExecution [ERROR] Failed to 
submit a listener notification task. Event loop shut down?
java.util.concurrent.RejectedExecutionException: event executor terminated
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.safeExecute(DefaultPromise.java:841)
        at 

[jira] [Created] (FLINK-27821) Cannot delete flinkdeployment when the pod and deployment deleted manually

2022-05-27 Thread Hector Miuler Malpica Gallegos (Jira)
Hector Miuler Malpica Gallegos created FLINK-27821:
--

 Summary: Cannot delete flinkdeployment when the pod and deployment 
deleted manually
 Key: FLINK-27821
 URL: https://issues.apache.org/jira/browse/FLINK-27821
 Project: Flink
  Issue Type: Bug
  Components: Kubernetes Operator
Affects Versions: kubernetes-operator-1.1.0
Reporter: Hector Miuler Malpica Gallegos


My operator was installed with following command:

```

git clone g...@github.com:apache/flink-kubernetes-operator.git git checkout 
207b17b

cd flink-kubernetes-operator  

helm --debug upgrade -i \
           flink-kubernetes-operator helm/flink-kubernetes-operator \
           --set image.repository=ghcr.io/apache/flink-kubernetes-operator \
           --set image.tag=207b17b

```

Then I create a flinkDeployment and flinkSessionJob, then I delete the 
deployment of the flinkDeployment, and finally I wanted to delete the 
flinkdeployment
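
A minimal sketch of that reproduction with placeholder resource names (the 
report does not give the actual names):

```
# Manually delete the JobManager deployment that backs the FlinkDeployment:
kubectl delete deployment my-flink-job -n flink-system

# Deleting the CR afterwards then hangs as shown in the log below:
kubectl delete flinkdeployment my-flink-job -n flink-system
```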

 

kubectl logs -f pod/flink-kubernetes-operator-5cf66cbbcb-bpl9p

 

```

2022-05-27 13:40:22,027 o.a.f.k.o.c.FlinkDeploymentController [INFO 
][flink-system/migration] Deleting FlinkDeployment
2022-05-27 13:40:34,047 o.a.f.s.n.i.n.c.AbstractChannel [WARN ] Force-closing a 
channel whose registration task was not accepted by an event loop: [id: 
0xb2062900]
java.util.concurrent.RejectedExecutionException: event executor terminated
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815)
        at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:483)
        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:87)
        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:81)
        at 
org.apache.flink.shaded.netty4.io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:86)
        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:323)
        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.doResolveAndConnect(Bootstrap.java:155)
        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:139)
        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:123)
        at 
org.apache.flink.runtime.rest.RestClient.submitRequest(RestClient.java:467)
        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:390)
        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:304)
        at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$null$32(RestClusterClient.java:863)
        at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(Unknown 
Source)
        at 
java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)
        at java.base/java.util.concurrent.CompletableFuture.postFire(Unknown 
Source)
        at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown 
Source)
        at 
java.base/java.util.concurrent.CompletableFuture$Completion.run(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown 
Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown 
Source)
        at java.base/java.lang.Thread.run(Unknown Source)
2022-05-27 13:40:34,047 o.a.f.s.n.i.n.u.c.D.rejectedExecution [ERROR] Failed to 
submit a listener notification task. Event loop shut down?
java.util.concurrent.RejectedExecutionException: event executor terminated
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)
        at 

[GitHub] [flink] flinkbot commented on pull request #19836: [FLINK-27819][rest][docs] Use proper operationIds

2022-05-27 Thread GitBox


flinkbot commented on PR #19836:
URL: https://github.com/apache/flink/pull/19836#issuecomment-1139623281

   
   ## CI report:
   
   * 031ebf31dd920a905ed1e506bd8cc5112f3fb9db UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-27812) Support Dynamic Change of Watched Namespaces

2022-05-27 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora reassigned FLINK-27812:
--

Assignee: Matyas Orhidi

> Support Dynamic Change of Watched Namespaces
> 
>
> Key: FLINK-27812
> URL: https://issues.apache.org/jira/browse/FLINK-27812
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.1.0
>Reporter: Matyas Orhidi
>Assignee: Matyas Orhidi
>Priority: Major
> Fix For: kubernetes-operator-1.1.0
>
>
> The new feature [Dynamically Changing Target Namespaces 
> |https://javaoperatorsdk.io/docs/features#dynamically-changing-target-namespaces]
>  introduced in JOSDK v3 makes it possible to listen to namespace changes without 
> restarting the operator. The watched namespaces are currently determined by 
> environment variables and could be moved into the ConfigMap next to the other 
> operator config parameters that are already handled dynamically.
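
A rough sketch of what this could look like; the Helm value and ConfigMap names 
below are assumptions for illustration, not confirmed by this ticket:

```
# Today: the watched namespaces are fixed at install/upgrade time
# (assumed Helm value name; check the chart's values.yaml for the real key).
helm upgrade flink-kubernetes-operator helm/flink-kubernetes-operator \
  --set 'watchNamespaces={ns1,ns2}'

# Proposed: edit the operator ConfigMap (assumed name, in the operator's namespace)
# and let the operator pick up the change dynamically, without a restart.
kubectl edit configmap flink-operator-config
```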



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27819) Generate better operationIds for OpenAPI spec

2022-05-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27819:
---
Labels: pull-request-available  (was: )

> Generate better operationIds for OpenAPI spec
> -
>
> Key: FLINK-27819
> URL: https://issues.apache.org/jira/browse/FLINK-27819
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Runtime / REST
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> There is an easy way to generate operation ids that are significantly better 
> than the defaults.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] zentol opened a new pull request, #19836: [FLINK-27819][rest][docs] Use proper operationIds

2022-05-27 Thread GitBox


zentol opened a new pull request, #19836:
URL: https://github.com/apache/flink/pull/19836

   Adds some simple generation logic and covers a few edge-cases to have proper 
operationIds in the OpenAPI spec. These are used for the generated method names.
   
   The naming pattern is similar to what we'd use in Java.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-27802) Savepoint restore errors are swallowed for Flink 1.15

2022-05-27 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542910#comment-17542910
 ] 

Gyula Fora commented on FLINK-27802:


That's true, but people generally look at the logs of the last attempt first, 
which at this point will have no information.
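
When that happens, the real failure usually still sits in an earlier attempt; a 
minimal sketch for digging it out (pod names are placeholders):

```
# Logs of the previous container instance of a restarted JobManager pod:
kubectl logs basic-example-7f5d6b9c44-abcde -n flink-system --previous

# If the pod itself was replaced, list pods by age and read the older ones while they exist:
kubectl get pods -n flink-system --sort-by=.metadata.creationTimestamp
```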

> Savepoint restore errors are swallowed for Flink 1.15
> -
>
> Key: FLINK-27802
> URL: https://issues.apache.org/jira/browse/FLINK-27802
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.0.0
>Reporter: Gyula Fora
>Priority: Critical
>
> When a job is submitted with an incorrect savepoint path the error is 
> swallowed by Flink due to the result store:
> 2022-05-26 12:34:43,497 WARN 
> org.apache.flink.runtime.dispatcher.StandaloneDispatcher [] - Ignoring 
> JobGraph submission 'State machine job' () 
> because the job already reached a globally-terminal state (i.e. FAILED, 
> CANCELED, FINISHED) in a previous execution.
> 2022-05-26 12:34:43,552 INFO 
> org.apache.flink.client.deployment.application.ApplicationDispatcherBootstrap 
> [] - Application completed SUCCESSFULLY
> The easiest way to reproduce this is to create a new deployment and set 
> initialSavepointPath to a random missing path.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27820) Handle Upgrade/Deployment errors gracefully

2022-05-27 Thread Gyula Fora (Jira)
Gyula Fora created FLINK-27820:
--

 Summary: Handle Upgrade/Deployment errors gracefully
 Key: FLINK-27820
 URL: https://issues.apache.org/jira/browse/FLINK-27820
 Project: Flink
  Issue Type: Improvement
  Components: Kubernetes Operator
Affects Versions: kubernetes-operator-1.0.0
Reporter: Gyula Fora
Assignee: Gyula Fora
 Fix For: kubernetes-operator-1.1.0


The operator currently cannot gracefully handle cases where there is a failure 
during job submission (or directly after it, before the status is updated).

This applies both to initial cluster submissions when a Flink CR is created 
and, more importantly, to upgrades.

This is slightly related to https://issues.apache.org/jira/browse/FLINK-27804 
where the mid-upgrade observe step was disabled to work around some issues; this 
logic should also be improved to only skip observing last-state info for already 
finished jobs (that were observed before).

During upgrades, the observer should be able to recognize that the job/cluster 
was already submitted even if the status update subsequently failed, and move 
the status into a healthy DEPLOYED state.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27775) FlinkKafkaProducer VS KafkaSink

2022-05-27 Thread Jiangfei Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangfei Liu updated FLINK-27775:
-
Issue Type: Bug  (was: Improvement)

> FlinkKafkaProducer VS KafkaSink
> ---
>
> Key: FLINK-27775
> URL: https://issues.apache.org/jira/browse/FLINK-27775
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.14.3
>Reporter: Jiangfei Liu
>Priority: Major
>  Labels: features
> Attachments: Snipaste_2022-05-25_19-52-11.png
>
>
> Sorry, my English is bad.
> In Flink 1.14.3, writing 1 data to Kafka:
> with FlinkKafkaProducer it completed in 7s,
> with KafkaSink it completed in 1m40s.
> Why is KafkaSink so slow?



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27804) Do not observe cluster/job mid upgrade

2022-05-27 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora updated FLINK-27804:
---
Fix Version/s: kubernetes-operator-1.0.0

> Do not observe cluster/job mid upgrade
> --
>
> Key: FLINK-27804
> URL: https://issues.apache.org/jira/browse/FLINK-27804
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.0.0
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>Priority: Critical
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.0.0
>
>
> Seems like in some weird cornercases when we observe the FINISHED job 
> (stopped with savepoint) during an upgrade the recorded last snapshot is 
> incorrect (still need to investigate if this is due to a Flink problem or 
> what) This can lead to upgrade errors.
> This can be avoided by simply skipping the observe step when the 
> reconciliation status is UPGRADING because at that point we actually know 
> that the job was already shut down and state recorded correctly in the 
> savepoint info.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Closed] (FLINK-27804) Do not observe cluster/job mid upgrade

2022-05-27 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-27804.
--
Resolution: Fixed

Merged

main: aa4f1d64d223d2dfa434edcd4c2ae8a9b54d0fdf
release-1.0: fbaad0f48cb5bf3a4ca2b685846dab9c072083e0

> Do not observe cluster/job mid upgrade
> --
>
> Key: FLINK-27804
> URL: https://issues.apache.org/jira/browse/FLINK-27804
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.0.0
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>Priority: Critical
>  Labels: pull-request-available
>
> Seems like in some weird cornercases when we observe the FINISHED job 
> (stopped with savepoint) during an upgrade the recorded last snapshot is 
> incorrect (still need to investigate if this is due to a Flink problem or 
> what) This can lead to upgrade errors.
> This can be avoided by simply skipping the observe step when the 
> reconciliation status is UPGRADING because at that point we actually know 
> that the job was already shut down and state recorded correctly in the 
> savepoint info.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Comment Edited] (FLINK-27734) Not showing checkpoint interval properly in WebUI when checkpoint is disabled

2022-05-27 Thread EMing Zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542839#comment-17542839
 ] 

EMing Zhou edited comment on FLINK-27734 at 5/27/22 12:55 PM:
--

Hi [~Feifan Wang] 

     I took a look at the PR you submitted. Why did you change 
'0x7' to 9223372036854776000?

     I tested it and it should be 9223372036854775807; the page then displays 
'Periodic checkpoints disabled', so 0x7 seems to be fine.

 


was (Author: zsigner):
Hi [~Feifan Wang] 

     I took a look at the PR you submitted. Why did you change 
'0x7' to 9223372036854776000? The original 
0x7 is hexadecimal, and it is exactly 147573952589676412927 
when converted to decimal.

> Not showing checkpoint interval properly  in WebUI when checkpoint is disabled
> --
>
> Key: FLINK-27734
> URL: https://issues.apache.org/jira/browse/FLINK-27734
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.15.0
>Reporter: Feifan Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2022-05-22-23-42-46-365.png
>
>
> Not showing checkpoint interval properly  in WebUI when checkpoint is disabled
> !image-2022-05-22-23-42-46-365.png|width=1019,height=362!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27804) Do not observe cluster/job mid upgrade

2022-05-27 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542902#comment-17542902
 ] 

Gyula Fora commented on FLINK-27804:


Yes, I completely agree that it should work, but once out of every few 
hundred upgrades we hit these weird cases, which I suspect might be caused by 
something like this. So with this change we can at least eliminate this root 
cause and see if it occurs again.

> Do not observe cluster/job mid upgrade
> --
>
> Key: FLINK-27804
> URL: https://issues.apache.org/jira/browse/FLINK-27804
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.0.0
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>Priority: Critical
>  Labels: pull-request-available
>
> Seems like in some weird cornercases when we observe the FINISHED job 
> (stopped with savepoint) during an upgrade the recorded last snapshot is 
> incorrect (still need to investigate if this is due to a Flink problem or 
> what) This can lead to upgrade errors.
> This can be avoided by simply skipping the observe step when the 
> reconciliation status is UPGRADING because at that point we actually know 
> that the job was already shut down and state recorded correctly in the 
> savepoint info.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-20681) Support specifying the hdfs path when ship archives or files

2022-05-27 Thread ming li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542901#comment-17542901
 ] 

ming li commented on FLINK-20681:
-

Hi [~RocMarshal], what is the current progress of this issue? We also want to 
use this feature to add some third-party packages to the job. :)

> Support specifying the hdfs path  when ship archives or files
> -
>
> Key: FLINK-20681
> URL: https://issues.apache.org/jira/browse/FLINK-20681
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: Ruguo Yu
>Assignee: RocMarshal
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available, pull-requests-available, stale-assigned
> Attachments: image-2020-12-23-20-58-41-234.png, 
> image-2020-12-24-01-01-10-021.png
>
>
> Currently, our team tries to submit Flink jobs that depend on extra resources with 
> the yarn-application target, using two options: "yarn.ship-archives" and 
> "yarn.ship-files".
> However, these options only support specifying local resources and shipping them to 
> HDFS. If they could also accept remote resources on a distributed filesystem 
> (such as HDFS), we would get the following benefits:
>  * the client can skip uploading the local resources, which accelerates the job 
> submission process
>  * YARN can cache them on the nodes so that they don't need to be 
> downloaded for each application
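
For context, a small sketch of how these options are used today with local 
paths, and the remote form this ticket asks for; paths are placeholders, and the 
HDFS variant is the requested feature, not something that works yet:

```
# Works today: ship local directories, which the client uploads to HDFS first.
flink run-application -t yarn-application \
  -Dyarn.ship-files="/opt/flink/extra-libs" \
  ./my-job.jar

# Requested: accept a path that already lives on HDFS, so the client upload
# can be skipped and YARN can cache the files on the nodes.
flink run-application -t yarn-application \
  -Dyarn.ship-files="hdfs:///shared/flink/extra-libs" \
  ./my-job.jar
```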



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink-kubernetes-operator] gyfora merged pull request #245: [FLINK-27804] Do not observe cluster/job mid upgrade

2022-05-27 Thread GitBox


gyfora merged PR #245:
URL: https://github.com/apache/flink-kubernetes-operator/pull/245


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] gyfora commented on pull request #245: [FLINK-27804] Do not observe cluster/job mid upgrade

2022-05-27 Thread GitBox


gyfora commented on PR #245:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/245#issuecomment-1139590007

   Merging this for now. There is definitely follow-up work to be done in the 
future to handle cases where the actual deployment step fails during an upgrade, 
which is generally not handled at the moment.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #19835: [FLINK-27818][rest][docs] Model enums as references

2022-05-27 Thread GitBox


flinkbot commented on PR #19835:
URL: https://github.com/apache/flink/pull/19835#issuecomment-1139584486

   
   ## CI report:
   
   * 44ca2a00ec5913a38cea4c218bc0e148cf8c762a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-27819) Generate better operationIds for OpenAPI spec

2022-05-27 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-27819:


 Summary: Generate better operationIds for OpenAPI spec
 Key: FLINK-27819
 URL: https://issues.apache.org/jira/browse/FLINK-27819
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Runtime / REST
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.16.0


There is an easy way to generate operation ids that are significantly better 
than the defaults.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27818) Model enums as references in OpenAPI spec

2022-05-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27818:
---
Labels: pull-request-available  (was: )

> Model enums as references in OpenAPI spec
> -
>
> Key: FLINK-27818
> URL: https://issues.apache.org/jira/browse/FLINK-27818
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Runtime / REST
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] zentol opened a new pull request, #19835: [FLINK-27818][rest][docs] Model enums as references

2022-05-27 Thread GitBox


zentol opened a new pull request, #19835:
URL: https://github.com/apache/flink/pull/19835

   Make sure the spec uses enum references for all endpoints, as otherwise a 
separate class is generated for each of them.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-27818) Model enums as references in OpenAPI spec

2022-05-27 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-27818:


 Summary: Model enums as references in OpenAPI spec
 Key: FLINK-27818
 URL: https://issues.apache.org/jira/browse/FLINK-27818
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Runtime / REST
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.16.0






--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-25197) Using Statefun RequestReplyFunctionBuilder fails with Java 8 date/time type `java.time.Duration` not supported by default: add Module "org.apache.flink.shaded.jackson2

2022-05-27 Thread Oleksandr (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542896#comment-17542896
 ] 

Oleksandr commented on FLINK-25197:
---

Hi [~galenwarren], could your fix affect this issue: 
https://issues.apache.org/jira/browse/FLINK-27813 ?

> Using Statefun RequestReplyFunctionBuilder fails with Java 8 date/time type 
> `java.time.Duration` not supported by default: add Module 
> "org.apache.flink.shaded.jackson2.com.fasterxml.jackson.datatype:jackson-datatype-jsr310"
>  to enable handling 
> ---
>
> Key: FLINK-25197
> URL: https://issues.apache.org/jira/browse/FLINK-25197
> Project: Flink
>  Issue Type: Bug
>  Components: Stateful Functions
>Affects Versions: statefun-3.1.0
>Reporter: Galen Warren
>Assignee: Galen Warren
>Priority: Major
>  Labels: pull-request-available
> Fix For: statefun-3.2.0, statefun-3.1.2
>
>
> When using RequestReplyFunctionBuilder to build a stateful functions job, the 
> job fails at runtime with:
> Java 8 date/time type `java.time.Duration` not supported by default: add 
> Module 
> "org.apache.flink.shaded.jackson2.com.fasterxml.jackson.datatype:jackson-datatype-jsr310"
>  to enable handling 
> It appears this is because, in 
> [RequestReplyFunctionBuilder::transportClientPropertiesAsObjectNode|https://github.com/apache/flink-statefun/blob/b4ba9547b8f0105a28544fd28a5e0433666e9023/statefun-flink/statefun-flink-datastream/src/main/java/org/apache/flink/statefun/flink/datastream/RequestReplyFunctionBuilder.java#L127],
>  a default instance of ObjectMapper is used to serialize the client 
> properties, which now include a java.time.Duration. There is a 
> [StateFunObjectMapper|https://github.com/apache/flink-statefun/blob/master/statefun-flink/statefun-flink-common/src/main/java/org/apache/flink/statefun/flink/common/json/StateFunObjectMapper.java]
>  class in the project that has customized serde support, but it is not used 
> here.
> The fix seems to be to:
>  * Use an instance of StateFunObjectMapper to serialize the client properties 
> in RequestReplyFunctionBuilder
>  * Modify StateFunObjectMapper to both serialize and deserialize instances of 
> java.time.Duration (currently, only deserialization is supported)
> I've made these changes locally and it seems to fix the problem. Would you be 
> interested in a PR? Thanks.
>  
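
For context, a minimal sketch of the general Jackson fix described above, using the unshaded jackson-datatype-jsr310 artifact (the actual StateFun fix goes through the shaded StateFunObjectMapper, so the package names differ):

{code:java}
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;

import java.time.Duration;

public class DurationJacksonSketch {
    public static void main(String[] args) throws Exception {
        // Without the JSR-310 module, writeValueAsString(Duration) fails with
        // "Java 8 date/time type `java.time.Duration` not supported by default".
        ObjectMapper mapper = new ObjectMapper().registerModule(new JavaTimeModule());
        String json = mapper.writeValueAsString(Duration.ofSeconds(30));
        Duration restored = mapper.readValue(json, Duration.class);
        System.out.println(json + " -> " + restored);
    }
}
{code}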



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink-kubernetes-operator] morhidi commented on pull request #239: [FLINK-27714] Migrate to java-operator-sdk v3

2022-05-27 Thread GitBox


morhidi commented on PR #239:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/239#issuecomment-1139569167

   None of the aforementioned issues are blockers, and the retry mechanism in 
JOSDK solves them under the hood, but we can wait for another patch/minor 
release before merging to be completely on the safe side. According to @csviri 
a new version of JOSDK containing these fixes is expected to be released within 
1-2 days.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-27734) Not showing checkpoint interval properly in WebUI when checkpoint is disabled

2022-05-27 Thread EMing Zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542839#comment-17542839
 ] 

EMing Zhou edited comment on FLINK-27734 at 5/27/22 12:12 PM:
--

Hi [~Feifan Wang],

     I took a look at the PR you submitted. Why did you change 
'0x7' to 9223372036854776000? The original '0x7' is a hexadecimal 
literal, and it is exactly 147573952589676412927 
when converted to decimal.


was (Author: zsigner):
Hi [~Feifan Wang] 

     I took a look at the pr you submitted, why did you change 
'0x7' to 9223372036854776000, the original corresponding 
0x7 is a hexadecimal, and it is exactly 0 when converted to 
decimal
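
Assuming the constant under discussion is the 64-bit value 0x7fffffffffffffff, i.e. Long.MAX_VALUE (the hex literal is truncated in this archive), the two numbers differ only because the web frontend stores it as a JavaScript/IEEE-754 double, which cannot represent Long.MAX_VALUE exactly:

{code:java}
public class LongMaxAsDouble {
    public static void main(String[] args) {
        long maxLong = 0x7fffffffffffffffL;   // == Long.MAX_VALUE == 9223372036854775807
        double asDouble = (double) maxLong;   // rounds to 2^63; JS numbers are doubles
        System.out.println(maxLong);          // 9223372036854775807
        System.out.println(asDouble);         // 9.223372036854776E18, i.e. 9223372036854776000
    }
}
{code}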

> Not showing checkpoint interval properly  in WebUI when checkpoint is disabled
> --
>
> Key: FLINK-27734
> URL: https://issues.apache.org/jira/browse/FLINK-27734
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.15.0
>Reporter: Feifan Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2022-05-22-23-42-46-365.png
>
>
> Not showing checkpoint interval properly  in WebUI when checkpoint is disabled
> !image-2022-05-22-23-42-46-365.png|width=1019,height=362!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] flinkbot commented on pull request #19834: [FLINK-27797][API/Python] PythonTableUtils.getCollectionInputFormat c…

2022-05-27 Thread GitBox


flinkbot commented on PR #19834:
URL: https://github.com/apache/flink/pull/19834#issuecomment-1139545614

   
   ## CI report:
   
   * 50e1d2ecad8b4193b5ddc70be829ad08e01132e6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-27797) PythonTableUtils.getCollectionInputFormat cannot correctly handle None values

2022-05-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27797:
---
Labels: pull-request-available  (was: )

> PythonTableUtils.getCollectionInputFormat cannot correctly handle None values
> -
>
> Key: FLINK-27797
> URL: https://issues.apache.org/jira/browse/FLINK-27797
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.15.0
>Reporter: Yunfeng Zhou
>Priority: Major
>  Labels: pull-request-available
>
> In `PythonTableUtils.getCollectionInputFormat` there are implementations like 
> the following.
> This code can be found at 
> [https://github.com/apache/flink/blob/8488368b86a99a064446ca74e775b670b94a/flink-python/src/main/java/org/apache/flink/table/utils/python/PythonTableUtils.java#L515]
> ```
> c -> {
> if (c.getClass() != byte[].class || dataType instanceof 
> PickledByteArrayTypeInfo) {
> return c;
> }
> ```
> Here, the generated function does not check `c != null` before calling 
> `c.getClass()`, which might cause tables created through PyFlink to fail to 
> parse values that are `None`.
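
A self-contained sketch of the null guard the ticket asks for (the converter function and its byte[] branch are simplified stand-ins, not the actual generated code):

{code:java}
import java.util.function.Function;

public class NullSafeConverterSketch {
    public static void main(String[] args) {
        Function<Object, Object> converter = c -> {
            // Check for null before calling c.getClass(); this is the missing
            // check described above, so None values pass through untouched.
            if (c == null || c.getClass() != byte[].class) {
                return c;
            }
            // ... conversion of byte[] payloads would happen here ...
            return c;
        };
        System.out.println(converter.apply(null));       // null passes through safely
        System.out.println(converter.apply("a string")); // non-byte[] passes through
    }
}
{code}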



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] BiGsuw opened a new pull request, #19834: [FLINK-27797][API/Python] PythonTableUtils.getCollectionInputFormat c…

2022-05-27 Thread GitBox


BiGsuw opened a new pull request, #19834:
URL: https://github.com/apache/flink/pull/19834

   What is the purpose of the change
   Fix PythonTableUtils.getCollectionInputFormat so that it correctly handles None 
values, as described in FLINK-27797.
   
   Does this pull request potentially affect one of the following parts:
   Dependencies (does it add or upgrade a dependency): (no)
   The public API, i.e., is any changed class annotated with @Public(Evolving): 
(no)
   The serializers: (no)
   The runtime per-record code paths (performance sensitive): (no)
   Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
   The S3 file system connector: (no)
   Documentation
   Does this pull request introduce a new feature? (no)
   If yes, how is the feature documented? (no)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] xuyangzhong commented on a diff in pull request #19797: [FLINK-27741][table-planner] Fix NPE when use dense_rank() and rank()…

2022-05-27 Thread GitBox


xuyangzhong commented on code in PR #19797:
URL: https://github.com/apache/flink/pull/19797#discussion_r883538277


##
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/runtime/stream/sql/OverAggregateITCase.scala:
##
@@ -159,6 +159,66 @@ class OverAggregateITCase(mode: StateBackendMode) extends 
StreamingWithStateTest
 assertEquals(expected.sorted, sink.getAppendResults.sorted)
   }
 
+  @Test
+  def testDenseRankOnOver(): Unit = {

Review Comment:
   Maybe you can add some test cases for the plan tests, not just for 
ITCases.



##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/aggfunctions/RankLikeAggFunctionBase.java:
##
@@ -98,7 +98,7 @@ protected Expression orderKeyEqualsExpression() {
 equalTo(lasValue, operand(i)));
 }
 Optional ret = 
Arrays.stream(orderKeyEquals).reduce(ExpressionBuilder::and);
-return ret.orElseGet(() -> literal(true));
+return ret.orElseGet(() -> literal(false));

Review Comment:
   Hi, I just wonder why this value should be changed?
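
   For what it's worth, the `orElseGet(...)` default only kicks in when `orderKeyEquals` is empty, because reducing an empty stream yields an empty Optional. A minimal illustration, with booleans standing in for the planner's Expression type:

   ```java
   import java.util.Arrays;
   import java.util.Optional;

   public class ReduceDefaultSketch {
       public static void main(String[] args) {
           // Stand-ins for the planner's Expression objects: plain booleans.
           Boolean[] orderKeyEquals = new Boolean[0]; // no order keys

           // reduce(...) over an empty stream yields Optional.empty(),
           // so the orElseGet(...) default decides the whole result.
           Optional<Boolean> ret = Arrays.stream(orderKeyEquals).reduce(Boolean::logicalAnd);
           System.out.println(ret.orElseGet(() -> true));  // true  (old default)
           System.out.println(ret.orElseGet(() -> false)); // false (new default)
       }
   }
   ```

   So the switch presumably only changes behavior for the case where there are no order-key equality expressions.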



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-27797) PythonTableUtils.getCollectionInputFormat cannot correctly handle None values

2022-05-27 Thread EMing Zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542882#comment-17542882
 ] 

EMing Zhou commented on FLINK-27797:


can I get this ticket?

> PythonTableUtils.getCollectionInputFormat cannot correctly handle None values
> -
>
> Key: FLINK-27797
> URL: https://issues.apache.org/jira/browse/FLINK-27797
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.15.0
>Reporter: Yunfeng Zhou
>Priority: Major
>
> In `PythonTableUtils.getCollectionInputFormat` there are implementations like 
> the following.
> This code can be found at 
> [https://github.com/apache/flink/blob/8488368b86a99a064446ca74e775b670b94a/flink-python/src/main/java/org/apache/flink/table/utils/python/PythonTableUtils.java#L515]
> ```
> c -> {
> if (c.getClass() != byte[].class || dataType instanceof 
> PickledByteArrayTypeInfo) {
> return c;
> }
> ```
> Here, the generated function does not check `c != null` before calling 
> `c.getClass()`, which might cause tables created through PyFlink to fail to 
> parse values that are `None`.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27722) Slack: set up auto-updated invitation link

2022-05-27 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542878#comment-17542878
 ] 

Jark Wu commented on FLINK-27722:
-

Not sure how others are doing this, like Airbyte and Materialize. 
It seems Airbyte is also using a short URL that redirects to the shared link.

> Slack: set up auto-updated invitation link
> --
>
> Key: FLINK-27722
> URL: https://issues.apache.org/jira/browse/FLINK-27722
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Xintong Song
>Assignee: Martijn Visser
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27813) java.lang.IllegalStateException: after migration from statefun-3.1.1 to 3.2.0

2022-05-27 Thread Oleksandr (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr updated FLINK-27813:
--
Description: 
Issue was met after migration from 

flink-statefun:3.1.1-java11

to

flink-statefun:3.2.0-java8

 
{code:java}
ts.execution-paymentManualValidationRequestEgress-egress, Sink: 
ua.aval.payments.execution-norkomExecutionRequested-egress, Sink: 
ua.aval.payments.execution-shareStatusToManualValidation-egress) (1/1)#15863 
(98a1f6bef9a435ef9d0292fefeb64022) switched from RUNNING to FAILED with failure 
cause: java.lang.IllegalStateException: Unable to parse Netty transport 
spec.\n\tat 
org.apache.flink.statefun.flink.core.nettyclient.NettyRequestReplyClientFactory.parseTransportSpec(NettyRequestReplyClientFactory.java:56)\n\tat
 
org.apache.flink.statefun.flink.core.nettyclient.NettyRequestReplyClientFactory.createTransportClient(NettyRequestReplyClientFactory.java:39)\n\tat
 
org.apache.flink.statefun.flink.core.httpfn.HttpFunctionProvider.functionOfType(HttpFunctionProvider.java:49)\n\tat
 
org.apache.flink.statefun.flink.core.functions.PredefinedFunctionLoader.load(PredefinedFunctionLoader.java:70)\n\tat
 
org.apache.flink.statefun.flink.core.functions.PredefinedFunctionLoader.load(PredefinedFunctionLoader.java:47)\n\tat
 
org.apache.flink.statefun.flink.core.functions.StatefulFunctionRepository.load(StatefulFunctionRepository.java:71)\n\tat
 
org.apache.flink.statefun.flink.core.functions.StatefulFunctionRepository.get(StatefulFunctionRepository.java:59)\n\tat
 
org.apache.flink.statefun.flink.core.functions.LocalFunctionGroup.enqueue(LocalFunctionGroup.java:48)\n\tat
 
org.apache.flink.statefun.flink.core.functions.Reductions.enqueue(Reductions.java:153)\n\tat
 
org.apache.flink.statefun.flink.core.functions.Reductions.apply(Reductions.java:148)\n\tat
 
org.apache.flink.statefun.flink.core.functions.FunctionGroupOperator.processElement(FunctionGroupOperator.java:90)\n\tat
 
org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:99)\n\tat
 
org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:80)\n\tat
 
org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)\n\tat
 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)\n\tat
 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)\n\tat
 
org.apache.flink.statefun.flink.core.feedback.FeedbackUnionOperator.sendDownstream(FeedbackUnionOperator.java:180)\n\tat
 
org.apache.flink.statefun.flink.core.feedback.FeedbackUnionOperator.processElement(FeedbackUnionOperator.java:86)\n\tat
 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:233)\n\tat
 
org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:134)\n\tat
 
org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:105)\n\tat
 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)\n\tat
 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:496)\n\tat
 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:203)\n\tat
 
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:809)\n\tat
 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:761)\n\tat
 
org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)\n\tat
 
org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:937)\n\tat 
org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766)\n\tat 
org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)\n\tat 
java.base/java.lang.Thread.run(Unknown Source)\nCaused by: 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonMappingException:
 Time interval unit label 'm' does not match any of the recognized units: DAYS: 
(d | day | days), HOURS: (h | hour | hours), MINUTES: (min | minute | minutes), 
SECONDS: (s | sec | secs | second | seconds), MILLISECONDS: (ms | milli | 
millis | millisecond | milliseconds), MICROSECONDS: (µs | micro | micros | 
microsecond | microseconds), NANOSECONDS: (ns | nano | nanos | nanosecond | 
nanoseconds) (through reference chain: 
org.apache.flink.statefun.flink.core.nettyclient.NettyRequestReplySpec[\"timeouts\"]->org.apache.flink.statefun.flink.core.nettyclient.NettyRequestReplySpec$Timeouts[\"call\"])\n\tat
 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:390)\n\tat
 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:349)\n\tat
 

[GitHub] [flink] gaborgsomogyi commented on pull request #19825: [FLINK-27171][runtime][security] Add periodic kerberos delegation token obtain possibility to DelegationTokenManager

2022-05-27 Thread GitBox


gaborgsomogyi commented on PR #19825:
URL: https://github.com/apache/flink/pull/19825#issuecomment-1139506508

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-ml] dotbg commented on pull request #95: Migrated to flink 1.15.0.

2022-05-27 Thread GitBox


dotbg commented on PR #95:
URL: https://github.com/apache/flink-ml/pull/95#issuecomment-1139500172

   Thanks a lot @yunfengzhou-hub, @lindong28. Sorry, I did not have time 
recently.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-27813) java.lang.IllegalStateException: after migration from statefun-3.1.1 to 3.2.0

2022-05-27 Thread Oleksandr (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr updated FLINK-27813:
--
Attachment: screenshot-1.png

> java.lang.IllegalStateException: after migration from statefun-3.1.1 to 3.2.0
> -
>
> Key: FLINK-27813
> URL: https://issues.apache.org/jira/browse/FLINK-27813
> Project: Flink
>  Issue Type: Bug
>  Components: API / State Processor
>Affects Versions: statefun-3.2.0
>Reporter: Oleksandr
>Priority: Blocker
> Attachments: screenshot-1.png
>
>
> Issue was met after migration to 3.2.0 
> {code:java}
> ts.execution-paymentManualValidationRequestEgress-egress, Sink: 
> ua.aval.payments.execution-norkomExecutionRequested-egress, Sink: 
> ua.aval.payments.execution-shareStatusToManualValidation-egress) (1/1)#15863 
> (98a1f6bef9a435ef9d0292fefeb64022) switched from RUNNING to FAILED with 
> failure cause: java.lang.IllegalStateException: Unable to parse Netty 
> transport spec.\n\tat 
> org.apache.flink.statefun.flink.core.nettyclient.NettyRequestReplyClientFactory.parseTransportSpec(NettyRequestReplyClientFactory.java:56)\n\tat
>  
> org.apache.flink.statefun.flink.core.nettyclient.NettyRequestReplyClientFactory.createTransportClient(NettyRequestReplyClientFactory.java:39)\n\tat
>  
> org.apache.flink.statefun.flink.core.httpfn.HttpFunctionProvider.functionOfType(HttpFunctionProvider.java:49)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.PredefinedFunctionLoader.load(PredefinedFunctionLoader.java:70)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.PredefinedFunctionLoader.load(PredefinedFunctionLoader.java:47)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.StatefulFunctionRepository.load(StatefulFunctionRepository.java:71)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.StatefulFunctionRepository.get(StatefulFunctionRepository.java:59)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.LocalFunctionGroup.enqueue(LocalFunctionGroup.java:48)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.Reductions.enqueue(Reductions.java:153)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.Reductions.apply(Reductions.java:148)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.FunctionGroupOperator.processElement(FunctionGroupOperator.java:90)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:99)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:80)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)\n\tat
>  
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)\n\tat
>  
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)\n\tat
>  
> org.apache.flink.statefun.flink.core.feedback.FeedbackUnionOperator.sendDownstream(FeedbackUnionOperator.java:180)\n\tat
>  
> org.apache.flink.statefun.flink.core.feedback.FeedbackUnionOperator.processElement(FeedbackUnionOperator.java:86)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:233)\n\tat
>  
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:134)\n\tat
>  
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:105)\n\tat
>  
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:496)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:203)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:809)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:761)\n\tat
>  
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)\n\tat
>  
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:937)\n\tat
>  org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766)\n\tat 
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)\n\tat 
> java.base/java.lang.Thread.run(Unknown Source)\nCaused by: 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonMappingException:
>  Time interval unit label 'm' does not match any of the recognized units: 
> DAYS: (d | day | days), HOURS: (h | hour | hours), MINUTES: (min | minute | 
> minutes), SECONDS: (s | sec | secs | second | seconds), MILLISECONDS: (ms | 
> milli | millis | millisecond | 

[jira] [Created] (FLINK-27817) TaskManager metaspace OOM for session cluster

2022-05-27 Thread godfrey he (Jira)
godfrey he created FLINK-27817:
--

 Summary: TaskManager metaspace OOM for session cluster
 Key: FLINK-27817
 URL: https://issues.apache.org/jira/browse/FLINK-27817
 Project: Flink
  Issue Type: New Feature
  Components: Runtime / Task
Reporter: godfrey he


From user ML: 
https://www.mail-archive.com/user-zh@flink.apache.org/msg15224.html

For SQL jobs, most operators are code generated with a *unique class name*; 
this causes the TM metaspace to keep growing until OOM in a session 
cluster.




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] luoyuxia commented on pull request #19713: [FLINK-27597][hive] HiveCatalog support to get statistic for partitioned table

2022-05-27 Thread GitBox


luoyuxia commented on PR #19713:
URL: https://github.com/apache/flink/pull/19713#issuecomment-1139493879

   > 
   @swuferhong Thanks for your review. But for altering a Hive partitioned table's 
statistics, it's expected to use `HiveCatalog#alterPartitionColumnStatistics`.
   For Hive, if you want to alter a partitioned table's statistics, you always 
have to specify a partition.
   BTW, Hive's `analyze` command also works at partition granularity for 
partitioned tables.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-27816) file source reader has Some requirements

2022-05-27 Thread jinshuangxian (Jira)
jinshuangxian created FLINK-27816:
-

 Summary: file source reader has Some requirements
 Key: FLINK-27816
 URL: https://issues.apache.org/jira/browse/FLINK-27816
 Project: Flink
  Issue Type: Improvement
Affects Versions: 1.15.0
Reporter: jinshuangxian


I use the Flink SQL filesystem connector to consume data, write it to object 
storage, and then use the filesystem source to process the data in the object 
storage. The whole process works fine. I have 2 new requirements:
1. Can I specify a timestamp so that only files within a given time range are consumed?
2. The data written to object storage should be ordered within a partition 
(for example, partitioned by device id), and the file source reader should be able 
to consume the files in order, similar to consuming from Kafka.
Can some enhancements be made to the file source reader?



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink-table-store] tsreaper commented on a diff in pull request #121: [FLINK-27626] Introduce AggregatuibMergeFunction

2022-05-27 Thread GitBox


tsreaper commented on code in PR #121:
URL: https://github.com/apache/flink-table-store/pull/121#discussion_r883447651


##
docs/content/docs/development/create-table.md:
##
@@ -268,3 +268,39 @@ For example, the inputs:
 
 Output: 
 - <1, 25.2, 20, 'This is a book'>
+
+## Aggregation Update
+
+You can configure partial update from options:
+
+```sql
+CREATE TABLE MyTable (
+  a STRING,
+  b INT,
+  c INT,
+  PRIMARY KEY (a) NOT ENFORCED 
+) WITH (
+  'merge-engine'='aggregation'

Review Comment:
   We also need to specify what aggregate functions are used for each field.



##
flink-table-store-core/src/main/java/org/apache/flink/table/store/file/mergetree/compact/SumAggregateFunction.java:
##
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.file.mergetree.compact;
+
+import java.io.Serializable;
+
+/** Custom column aggregation abstract class. */
+public interface SumAggregateFunction extends Serializable {

Review Comment:
   This interface can also be used by other aggregate functions, not only sum. 
Make this a more generic aggregate function interface.
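
   One possible shape for such a generic interface (all names here are hypothetical illustrations, not the PR's actual classes):

   ```java
   import java.io.Serializable;

   /** Hypothetical generic per-column aggregate function, one instance per field. */
   interface ColumnAggregateFunction<T> extends Serializable {
       T agg(T accumulator, T input);
   }

   /** Example sum implementation for Integer columns. */
   class IntSumAggregateFunction implements ColumnAggregateFunction<Integer> {
       @Override
       public Integer agg(Integer accumulator, Integer input) {
           if (accumulator == null) {
               return input;
           }
           return input == null ? accumulator : accumulator + input;
       }
   }

   public class AggregateSketch {
       public static void main(String[] args) {
           ColumnAggregateFunction<Integer> sum = new IntSumAggregateFunction();
           System.out.println(sum.agg(sum.agg(null, 20), 5)); // 25
       }
   }
   ```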



##
flink-table-store-core/src/main/java/org/apache/flink/table/store/file/mergetree/compact/AggregateFunctionFactory.java:
##
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.file.mergetree.compact;
+
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.Set;
+
+/**
+ * Aggregate Function Factory is used to get the aggregate type based on the 
configuration Each
+ * aggregate type has its own aggregate function factory Different 
implementation classes are given
+ * for different data types.
+ */
+public class AggregateFunctionFactory {
+
+/** SumFactory. */
+public static class SumAggregateFunctionFactory {
+static SumAggregateFunction choiceRightAggregateFunction(Class 
c) {

Review Comment:
   Use `LogicalType` instead of `Class`. See `TypeUtils` class in 
`flink-table-store-common` module for an example.



##
flink-table-store-core/src/main/java/org/apache/flink/table/store/file/mergetree/compact/AggregateFunctionFactory.java:
##
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.file.mergetree.compact;
+
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.Set;
+
+/**
+ * Aggregate Function Factory is used to get the aggregate type based on the 
configuration Each
+ * aggregate type has its own aggregate function factory Different 
implementation classes are given
+ * for 

[GitHub] [flink] flinkbot commented on pull request #19833: [FLINK-27778][table][planner] Use correct 'newName()' method

2022-05-27 Thread GitBox


flinkbot commented on PR #19833:
URL: https://github.com/apache/flink/pull/19833#issuecomment-1139460307

   
   ## CI report:
   
   * 001e34182005a7d21edb3e3265b7298b4ed24007 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-27778) table-planner should not use CodeSplitUtils#newName

2022-05-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27778:
---
Labels: pull-request-available  (was: )

> table-planner should not use CodeSplitUtils#newName
> ---
>
> Key: FLINK-27778
> URL: https://issues.apache.org/jira/browse/FLINK-27778
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Build System, Table SQL / Planner
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> The {{table-planner}} has a direct dependency on the {{table-code-splitter}}, 
> as several CastRules use {{CodeSplitUtil.newName}}.
> This dependency is a bit hidden. In the IDE it is pulled in transitively via 
> {{table-runtime}}, and in maven it uses the {{table-code-splitter}} 
> dependency bundled by {{table-runtime}}.
> It would be nice if we could add a {{provided}} dependency to the 
> {{table-code-splitter}} to properly document this.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] zentol opened a new pull request, #19833: [FLINK-27778][table][planner] Use correct 'newName()' method

2022-05-27 Thread GitBox


zentol opened a new pull request, #19833:
URL: https://github.com/apache/flink/pull/19833

   Fix an issue where the planner was using a method from the wrong class.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-27778) table-planner should not use CodeSplitUtils#newName

2022-05-27 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542856#comment-17542856
 ] 

Chesnay Schepler commented on FLINK-27778:
--

Reached out to [~tiwalter]; the planner is using the wrong method; it should 
use {{CodeGenUtils#newName}} instead.

> table-planner should not use CodeSplitUtils#newName
> ---
>
> Key: FLINK-27778
> URL: https://issues.apache.org/jira/browse/FLINK-27778
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Build System, Table SQL / Planner
>Reporter: Chesnay Schepler
>Priority: Major
> Fix For: 1.16.0
>
>
> The {{table-planner}} has a direct dependency on the {{table-code-splitter}}, 
> as several CastRules use {{CodeSplitUtil.newName}}.
> This dependency is a bit hidden. In the IDE it is pulled in transitively via 
> {{table-runtime}}, and in maven it uses the {{table-code-splitter}} 
> dependency bundled by {{table-runtime}}.
> It would be nice if we could add a {{provided}} dependency to the 
> {{table-code-splitter}} to properly document this.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (FLINK-27778) table-planner should not use CodeSplitUtils#newName

2022-05-27 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-27778:


Assignee: Chesnay Schepler

> table-planner should not use CodeSplitUtils#newName
> ---
>
> Key: FLINK-27778
> URL: https://issues.apache.org/jira/browse/FLINK-27778
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Build System, Table SQL / Planner
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.16.0
>
>
> The {{table-planner}} has a direct dependency on the {{table-code-splitter}}, 
> as several CastRules use {{CodeSplitUtil.newName}}.
> This dependency is a bit hidden. In the IDE it is pulled in transitively via 
> {{table-runtime}}, and in maven it uses the {{table-code-splitter}} 
> dependency bundled by {{table-runtime}}.
> It would be nice if we could add a {{provided}} dependency to the 
> {{table-code-splitter}} to properly document this.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27815) Improve the join reorder strategy for batch sql job

2022-05-27 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-27815:
---
Description: 
Join is a heavy operation at execution time, and the join order in a query can have a 
significant impact on the query’s performance. 
Currently, the planner has one join reorder strategy, which is provided by 
Apache Calcite, and it strongly depends on the statistics.
It would be better if we could provide different join reorder strategies for different 
situations, such as:
1. provide a join reorder strategy without statistics, e.g. eliminate cross 
joins
2. improve the current join reorder strategy with statistics
3. provide hints to allow users to choose the join order strategy
4. ...

  was:
Join is heavy operation in the execution, the join order in a query can have a 
significant impact on the query’s performance. 
Currently, the planner has one  join reorder strategy which is provided by 
Apache Calcite,
but it strongly depends on the statistics. It's better we can provide different 
join reorder strategies for different situations, such as:
1. provide a join reorder strategy without statistics, e.g. eliminate cross 
joins
2. improve current join reorders strategy with statistics
3. provide hints to allow users to choose join order strategy
4. ...


> Improve the join reorder strategy for batch sql job 
> 
>
> Key: FLINK-27815
> URL: https://issues.apache.org/jira/browse/FLINK-27815
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Major
>
> Join is a heavy operation at execution time, and the join order in a query can 
> have a significant impact on the query’s performance. 
> Currently, the planner has one join reorder strategy, which is provided by 
> Apache Calcite, and it strongly depends on the statistics.
> It would be better if we could provide different join reorder strategies for 
> different situations, such as:
> 1. provide a join reorder strategy without statistics, e.g. eliminate cross 
> joins
> 2. improve the current join reorder strategy with statistics
> 3. provide hints to allow users to choose the join order strategy
> 4. ...



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27778) table-planner should not use CodeSplitUtils#newName

2022-05-27 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-27778:
-
Summary: table-planner should not use CodeSplitUtils#newName  (was: 
table-planner should explicitly depend on table-code-splitter)

> table-planner should not use CodeSplitUtils#newName
> ---
>
> Key: FLINK-27778
> URL: https://issues.apache.org/jira/browse/FLINK-27778
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Build System, Table SQL / Planner
>Reporter: Chesnay Schepler
>Priority: Major
> Fix For: 1.16.0
>
>
> The {{table-planner}} has a direct dependency on the {{table-code-splitter}}, 
> as several CastRules use {{CodeSplitUtil.newName}}.
> This dependency is a bit hidden. In the IDE it is pulled in transitively via 
> {{table-runtime}}, and in maven it uses the {{table-code-splitter}} 
> dependency bundled by {{table-runtime}}.
> It would be nice if we could add a {{provided}} dependency to the 
> {{table-code-splitter}} to properly document this.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (FLINK-27815) Improve the join reorder strategy for batch sql job

2022-05-27 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he reassigned FLINK-27815:
--

Assignee: godfrey he

> Improve the join reorder strategy for batch sql job 
> 
>
> Key: FLINK-27815
> URL: https://issues.apache.org/jira/browse/FLINK-27815
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Major
>
> Join is heavy operation in the execution, the join order in a query can have 
> a significant impact on the query’s performance. 
> Currently, the planner has one  join reorder strategy which is provided by 
> Apache Calcite,
> but it strongly depends on the statistics. It's better we can provide 
> different join reorder strategies for different situations, such as:
> 1. provide a join reorder strategy without statistics, e.g. eliminate cross 
> joins
> 2. improve current join reorders strategy with statistics
> 3. provide hints to allow users to choose join order strategy
> 4. ...



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27815) Improve the join reorder strategy for batch sql job

2022-05-27 Thread godfrey he (Jira)
godfrey he created FLINK-27815:
--

 Summary: Improve the join reorder strategy for batch sql job 
 Key: FLINK-27815
 URL: https://issues.apache.org/jira/browse/FLINK-27815
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / Planner
Reporter: godfrey he


Join is heavy operation in the execution, the join order in a query can have a 
significant impact on the query’s performance. 
Currently, the planner has one  join reorder strategy which is provided by 
Apache Calcite,
but it strongly depends on the statistics. It's better we can provide different 
join reorder strategies for different situations, such as:
1. provide a join reorder strategy without statistics, e.g. eliminate cross 
joins
2. improve current join reorders strategy with statistics
3. provide hints to allow users to choose join order strategy
4. ...



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27812) Support Dynamic Change of Watched Namespaces

2022-05-27 Thread Matyas Orhidi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matyas Orhidi updated FLINK-27812:
--
Issue Type: New Feature  (was: Improvement)

> Support Dynamic Change of Watched Namespaces
> 
>
> Key: FLINK-27812
> URL: https://issues.apache.org/jira/browse/FLINK-27812
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.1.0
>Reporter: Matyas Orhidi
>Priority: Major
> Fix For: kubernetes-operator-1.1.0
>
>
> The new feature [Dynamically Changing Target Namespaces 
> |https://javaoperatorsdk.io/docs/features#dynamically-changing-target-namespaces]
>  introduced in JOSDK v3 makes it possible to listen to namespace changes without 
> restarting the operator. The watched namespaces are currently determined by 
> environment variables, and can be moved into the ConfigMap next to other 
> operator config parameters that are already handled dynamically.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27814) Add an abstraction layer for connectors to read and write row data instead of key-values

2022-05-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27814:
---
Labels: pull-request-available  (was: )

> Add an abstraction layer for connectors to read and write row data instead of 
> key-values
> 
>
> Key: FLINK-27814
> URL: https://issues.apache.org/jira/browse/FLINK-27814
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
>
> Currently {{FileStore}} exposes an interface for reading and writing 
> {{KeyValue}}. However connectors may have different ways to change a 
> {{RowData}} to {{KeyValue}} under different {{WriteMode}}. This results in 
> lots of {{if...else...}} branches and duplicated code.
> We need to add an abstraction layer for connectors to read and write row data 
> instead of key-values.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink-kubernetes-operator] morhidi commented on a diff in pull request #244: [FLINK-27520] Use admission-controller-framework in Webhook

2022-05-27 Thread GitBox


morhidi commented on code in PR #244:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/244#discussion_r883354595


##
flink-kubernetes-webhook/pom.xml:
##
@@ -36,6 +36,19 @@ under the License.
 org.apache.flink
 flink-kubernetes-operator
 ${project.version}
+provided
+
+
+
+io.javaoperatorsdk
+operator-framework-framework-core
+${operator.sdk.admission-controller.version}
+
+
+*

Review Comment:
   The admission-controller-framework depends on an older 
`io.fabric8:kubernetes-client`. It had some collisions with the newer version 
that we put on the class-path with the operator-shaded jar. Since this code 
worked in the operator before, it seemed safe to remove the dependencies and 
use them from the operator class-path. Hopefully the validation framework will 
merge with the operator SDK and share the dependencies.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-table-store] tsreaper opened a new pull request, #142: [FLINK-27814] Add an abstraction layer for connectors to read and write row data instead of key-values

2022-05-27 Thread GitBox


tsreaper opened a new pull request, #142:
URL: https://github.com/apache/flink-table-store/pull/142

   Currently `FileStore` exposes an interface for reading and writing 
`KeyValue`. However connectors may have different ways to change a `RowData` to 
`KeyValue` under different `WriteMode`. This results in lots of `if...else...` 
branches and duplicated code.
   
   We need to add an abstraction layer for connectors to read and write row 
data instead of key-values.
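
   As a rough illustration of the idea (hypothetical names, not the actual table-store API), the per-write-mode branching can be hidden behind a small conversion interface:

   ```java
   /** Hypothetical abstraction: each write mode supplies its own row-to-key-value conversion. */
   interface RowToKeyValueConverter {
       String convert(String row);
   }

   class AppendOnlyConverter implements RowToKeyValueConverter {
       @Override
       public String convert(String row) {
           // Append-only mode: the whole row is the value, there is no key.
           return "<empty-key, " + row + ">";
       }
   }

   class ChangelogConverter implements RowToKeyValueConverter {
       @Override
       public String convert(String row) {
           // Changelog mode: primary-key fields become the key (simplified here).
           String key = row.split(",")[0];
           return "<" + key + ", " + row + ">";
       }
   }

   public class WriteModeSketch {
       public static void main(String[] args) {
           // Connector code picks one converter up front instead of branching on
           // the write mode at every call site.
           RowToKeyValueConverter converter = new ChangelogConverter();
           System.out.println(converter.convert("user_1,42"));
       }
   }
   ```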


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-27778) table-planner should explicitly depend on table-code-splitter

2022-05-27 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542851#comment-17542851
 ] 

Chesnay Schepler commented on FLINK-27778:
--

Actually, it is not clear to me why the table-runtime bundles 
table-code-splitter in the first place.

> table-planner should explicitly depend on table-code-splitter
> -
>
> Key: FLINK-27778
> URL: https://issues.apache.org/jira/browse/FLINK-27778
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Build System, Table SQL / Planner
>Reporter: Chesnay Schepler
>Priority: Major
> Fix For: 1.16.0
>
>
> The {{table-planner}} has a direct dependency on the {{table-code-splitter}}, 
> as several CastRules use {{CodeSplitUtil.newName}}.
> This dependency is a bit hidden. In the IDE it is pulled in transitively via 
> {{table-runtime}}, and in maven it uses the {{table-code-splitter}} 
> dependency bundled by {{table-runtime}}.
> It would be nice if we could add a {{provided}} dependency to the 
> {{table-code-splitter}} to properly document this.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27814) Add an abstraction layer for connectors to read and write row data instead of key-values

2022-05-27 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-27814:
---

 Summary: Add an abstraction layer for connectors to read and write 
row data instead of key-values
 Key: FLINK-27814
 URL: https://issues.apache.org/jira/browse/FLINK-27814
 Project: Flink
  Issue Type: Improvement
  Components: Table Store
Affects Versions: table-store-0.2.0
Reporter: Caizhi Weng


Currently {{FileStore}} exposes an interface for reading and writing 
{{KeyValue}}. However connectors may have different ways to change a 
{{RowData}} to {{KeyValue}} under different {{WriteMode}}. This results in lots 
of {{if...else...}} branches and duplicated code.

We need to add an abstraction layer for connectors to read and write row data 
instead of key-values.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink-kubernetes-operator] morhidi commented on a diff in pull request #239: [FLINK-27714] Migrate to java-operator-sdk v3

2022-05-27 Thread GitBox


morhidi commented on code in PR #239:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/239#discussion_r883435675


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/config/FlinkConfigManager.java:
##
@@ -63,7 +63,7 @@ public class FlinkConfigManager {
 private final AtomicLong defaultConfigVersion = new AtomicLong(0);
 
 private final LoadingCache cache;
-private Set namespaces = OperatorUtils.getWatchedNamespaces();
+private final Set namespaces = EnvUtils.getWatchedNamespaces();

Review Comment:
   https://issues.apache.org/jira/browse/FLINK-27812 <- Follow up ticket for 
dynamic namespaces



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] morhidi commented on a diff in pull request #239: [FLINK-27714] Migrate to java-operator-sdk v3

2022-05-27 Thread GitBox


morhidi commented on code in PR #239:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/239#discussion_r883435675


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/config/FlinkConfigManager.java:
##
@@ -63,7 +63,7 @@ public class FlinkConfigManager {
 private final AtomicLong defaultConfigVersion = new AtomicLong(0);
 
 private final LoadingCache cache;
-private Set namespaces = OperatorUtils.getWatchedNamespaces();
+private final Set namespaces = EnvUtils.getWatchedNamespaces();

Review Comment:
   https://issues.apache.org/jira/browse/FLINK-27812



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-27812) Support Dynamic Change of Watched Namespaces

2022-05-27 Thread Matyas Orhidi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matyas Orhidi updated FLINK-27812:
--
  Component/s: Kubernetes Operator
Fix Version/s: kubernetes-operator-1.1.0
Affects Version/s: kubernetes-operator-1.1.0
  Description: The new feature [Dynamically Changing Target Namespaces 
|https://javaoperatorsdk.io/docs/features#dynamically-changing-target-namespaces]
 introduced in JOSDK v3 makes it possible to listen to namespace changes without 
restarting the operator. The watched namespaces are currently determined by 
environment variables, and can be moved into the ConfigMap next to other 
operator config parameters that are already handled dynamically.

> Support Dynamic Change of Watched Namespaces
> 
>
> Key: FLINK-27812
> URL: https://issues.apache.org/jira/browse/FLINK-27812
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.1.0
>Reporter: Matyas Orhidi
>Priority: Major
> Fix For: kubernetes-operator-1.1.0
>
>
> The new feature [Dynamically Changing Target Namespaces 
> |https://javaoperatorsdk.io/docs/features#dynamically-changing-target-namespaces]
>  introduced in JOSDK v3 makes it possible to listen to namespace changes without 
> restarting the operator. The watched namespaces are currently determined by 
> environment variables, and can be moved into the ConfigMap next to other 
> operator config parameters that are already handled dynamically.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27813) java.lang.IllegalStateException: after migration from statefun-3.1.1 to 3.2.0

2022-05-27 Thread Oleksandr (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr updated FLINK-27813:
--
Summary: java.lang.IllegalStateException: after migration from 
statefun-3.1.1 to 3.2.0  (was: java.lang.IllegalStateException: afte migration 
from statefun 3.1.1 to 3.2.0)

> java.lang.IllegalStateException: after migration from statefun-3.1.1 to 3.2.0
> -
>
> Key: FLINK-27813
> URL: https://issues.apache.org/jira/browse/FLINK-27813
> Project: Flink
>  Issue Type: Bug
>  Components: API / State Processor
>Affects Versions: statefun-3.2.0
>Reporter: Oleksandr
>Priority: Blocker
>
> The issue appeared after migrating to 3.2.0: 
> {code:java}
> ts.execution-paymentManualValidationRequestEgress-egress, Sink: 
> ua.aval.payments.execution-norkomExecutionRequested-egress, Sink: 
> ua.aval.payments.execution-shareStatusToManualValidation-egress) (1/1)#15863 
> (98a1f6bef9a435ef9d0292fefeb64022) switched from RUNNING to FAILED with 
> failure cause: java.lang.IllegalStateException: Unable to parse Netty 
> transport spec.\n\tat 
> org.apache.flink.statefun.flink.core.nettyclient.NettyRequestReplyClientFactory.parseTransportSpec(NettyRequestReplyClientFactory.java:56)\n\tat
>  
> org.apache.flink.statefun.flink.core.nettyclient.NettyRequestReplyClientFactory.createTransportClient(NettyRequestReplyClientFactory.java:39)\n\tat
>  
> org.apache.flink.statefun.flink.core.httpfn.HttpFunctionProvider.functionOfType(HttpFunctionProvider.java:49)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.PredefinedFunctionLoader.load(PredefinedFunctionLoader.java:70)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.PredefinedFunctionLoader.load(PredefinedFunctionLoader.java:47)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.StatefulFunctionRepository.load(StatefulFunctionRepository.java:71)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.StatefulFunctionRepository.get(StatefulFunctionRepository.java:59)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.LocalFunctionGroup.enqueue(LocalFunctionGroup.java:48)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.Reductions.enqueue(Reductions.java:153)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.Reductions.apply(Reductions.java:148)\n\tat
>  
> org.apache.flink.statefun.flink.core.functions.FunctionGroupOperator.processElement(FunctionGroupOperator.java:90)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:99)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:80)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)\n\tat
>  
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)\n\tat
>  
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)\n\tat
>  
> org.apache.flink.statefun.flink.core.feedback.FeedbackUnionOperator.sendDownstream(FeedbackUnionOperator.java:180)\n\tat
>  
> org.apache.flink.statefun.flink.core.feedback.FeedbackUnionOperator.processElement(FeedbackUnionOperator.java:86)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:233)\n\tat
>  
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:134)\n\tat
>  
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:105)\n\tat
>  
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:496)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:203)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:809)\n\tat
>  
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:761)\n\tat
>  
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)\n\tat
>  
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:937)\n\tat
>  org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766)\n\tat 
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)\n\tat 
> java.base/java.lang.Thread.run(Unknown Source)\nCaused by: 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonMappingException:
>  Time interval unit label 'm' does not match any of the recognized units: 
> DAYS: (d | day | days), HOURS: (h | hour | hours), MINUTES: (min | minute | 
> minutes), SECONDS: 

[GitHub] [flink] flinkbot commented on pull request #19832: [FLINK-27811][build][netty] Remove unused netty dependency

2022-05-27 Thread GitBox


flinkbot commented on PR #19832:
URL: https://github.com/apache/flink/pull/19832#issuecomment-1139437495

   
   ## CI report:
   
   * 2e879c6480c669616a7a1f81ed17ee54dbe01391 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-27813) java.lang.IllegalStateException: afte migration from statefun 3.1.1 to 3.2.0

2022-05-27 Thread Oleksandr (Jira)
Oleksandr created FLINK-27813:
-

 Summary: java.lang.IllegalStateException: afte migration from 
statefun 3.1.1 to 3.2.0
 Key: FLINK-27813
 URL: https://issues.apache.org/jira/browse/FLINK-27813
 Project: Flink
  Issue Type: Bug
  Components: API / State Processor
Affects Versions: statefun-3.2.0
Reporter: Oleksandr


The issue appeared after migrating to 3.2.0: 
{code:java}
ts.execution-paymentManualValidationRequestEgress-egress, Sink: 
ua.aval.payments.execution-norkomExecutionRequested-egress, Sink: 
ua.aval.payments.execution-shareStatusToManualValidation-egress) (1/1)#15863 
(98a1f6bef9a435ef9d0292fefeb64022) switched from RUNNING to FAILED with failure 
cause: java.lang.IllegalStateException: Unable to parse Netty transport 
spec.\n\tat 
org.apache.flink.statefun.flink.core.nettyclient.NettyRequestReplyClientFactory.parseTransportSpec(NettyRequestReplyClientFactory.java:56)\n\tat
 
org.apache.flink.statefun.flink.core.nettyclient.NettyRequestReplyClientFactory.createTransportClient(NettyRequestReplyClientFactory.java:39)\n\tat
 
org.apache.flink.statefun.flink.core.httpfn.HttpFunctionProvider.functionOfType(HttpFunctionProvider.java:49)\n\tat
 
org.apache.flink.statefun.flink.core.functions.PredefinedFunctionLoader.load(PredefinedFunctionLoader.java:70)\n\tat
 
org.apache.flink.statefun.flink.core.functions.PredefinedFunctionLoader.load(PredefinedFunctionLoader.java:47)\n\tat
 
org.apache.flink.statefun.flink.core.functions.StatefulFunctionRepository.load(StatefulFunctionRepository.java:71)\n\tat
 
org.apache.flink.statefun.flink.core.functions.StatefulFunctionRepository.get(StatefulFunctionRepository.java:59)\n\tat
 
org.apache.flink.statefun.flink.core.functions.LocalFunctionGroup.enqueue(LocalFunctionGroup.java:48)\n\tat
 
org.apache.flink.statefun.flink.core.functions.Reductions.enqueue(Reductions.java:153)\n\tat
 
org.apache.flink.statefun.flink.core.functions.Reductions.apply(Reductions.java:148)\n\tat
 
org.apache.flink.statefun.flink.core.functions.FunctionGroupOperator.processElement(FunctionGroupOperator.java:90)\n\tat
 
org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:99)\n\tat
 
org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:80)\n\tat
 
org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)\n\tat
 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)\n\tat
 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)\n\tat
 
org.apache.flink.statefun.flink.core.feedback.FeedbackUnionOperator.sendDownstream(FeedbackUnionOperator.java:180)\n\tat
 
org.apache.flink.statefun.flink.core.feedback.FeedbackUnionOperator.processElement(FeedbackUnionOperator.java:86)\n\tat
 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:233)\n\tat
 
org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:134)\n\tat
 
org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:105)\n\tat
 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)\n\tat
 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:496)\n\tat
 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:203)\n\tat
 
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:809)\n\tat
 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:761)\n\tat
 
org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)\n\tat
 
org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:937)\n\tat 
org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766)\n\tat 
org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)\n\tat 
java.base/java.lang.Thread.run(Unknown Source)\nCaused by: 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonMappingException:
 Time interval unit label 'm' does not match any of the recognized units: DAYS: 
(d | day | days), HOURS: (h | hour | hours), MINUTES: (min | minute | minutes), 
SECONDS: (s | sec | secs | second | seconds), MILLISECONDS: (ms | milli | 
millis | millisecond | milliseconds), MICROSECONDS: (µs | micro | micros | 
microsecond | microseconds), NANOSECONDS: (ns | nano | nanos | nanosecond | 
nanoseconds) (through reference chain: 
org.apache.flink.statefun.flink.core.nettyclient.NettyRequestReplySpec[\"timeouts\"]->org.apache.flink.statefun.flink.core.nettyclient.NettyRequestReplySpec$Timeouts[\"call\"])\n\tat
 

[jira] [Updated] (FLINK-27811) Remove netty dependency in flink-test-utils

2022-05-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27811:
---
Labels: pull-request-available  (was: )

> Remove netty dependency in flink-test-utils
> ---
>
> Key: FLINK-27811
> URL: https://issues.apache.org/jira/browse/FLINK-27811
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Build System, Tests
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> For some reason we bundle a relocated version of netty in flink-test-utils. 
> AFAICT this should be unnecessary because nothing makes use of the relocated 
> version.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] zentol opened a new pull request, #19832: [FLINK-27811][build][netty] Remove unused netty dependency

2022-05-27 Thread GitBox


zentol opened a new pull request, #19832:
URL: https://github.com/apache/flink/pull/19832

   Nothing references the relocated netty, so we can safely remove this.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


