[jira] [Assigned] (FLINK-24907) Enable Side Output for late events on IntervalJoins

2022-09-18 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser reassigned FLINK-24907:
--

Assignee: chenyuzhi

> Enable Side Output for late events on IntervalJoins
> ---
>
> Key: FLINK-24907
> URL: https://issues.apache.org/jira/browse/FLINK-24907
> Project: Flink
>  Issue Type: Improvement
>Reporter: Andrew Straussman
>Assignee: chenyuzhi
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
> Fix For: 1.17.0
>
>
> Currently it appears that interval joins silently drop late events: 
> [https://github.com/apache/flink/blob/83a2541475228a4ff9e9a9def4049fb742353549/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/co/IntervalJoinOperator.java#L231]
> It would be great if those events would instead get sent to a side output, 
> like you can enable for other windowed join operations.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-24907) Enable Side Output for late events on IntervalJoins

2022-09-18 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-24907:
---
Fix Version/s: 1.17.0

> Enable Side Output for late events on IntervalJoins
> ---
>
> Key: FLINK-24907
> URL: https://issues.apache.org/jira/browse/FLINK-24907
> Project: Flink
>  Issue Type: Improvement
>Reporter: Andrew Straussman
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
> Fix For: 1.17.0
>
>
> Currently it appears that interval joins silently drop late events: 
> [https://github.com/apache/flink/blob/83a2541475228a4ff9e9a9def4049fb742353549/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/co/IntervalJoinOperator.java#L231]
> It would be great if those events would instead get sent to a side output, 
> like you can enable for other windowed join operations.





[jira] [Closed] (FLINK-15635) Allow passing a ClassLoader to EnvironmentSettings

2022-09-18 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-15635.
---
Release Note: TableEnvironment introduces a user class loader to provide 
consistent class loading behavior across table programs, SQL Client, and SQL 
Gateway. The user class loader manages all user JARs, such as those added by 
`ADD JAR` or `CREATE FUNCTION .. USING JAR ..` statements. User-defined 
functions/connectors/catalogs should replace 
`Thread.currentThread().getContextClassLoader()` with the user class loader to 
load classes; otherwise, a ClassNotFoundException may be thrown. The user class 
loader can be accessed via `FunctionContext#getUserCodeClassLoader`, 
`DynamicTableFactory.Context#getClassLoader`, and 
`CatalogFactory.Context#getClassLoader`.
  Resolution: Fixed
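The release note above boils down to one pattern: code that previously grabbed the thread context class loader should instead load classes through the class loader handed to it by its context object. A minimal JDK-only sketch of that pattern, where the `userClassLoader` parameter stands in for whatever `FunctionContext#getUserCodeClassLoader` or `DynamicTableFactory.Context#getClassLoader` would return (the Flink types themselves are not used here):

```java
public class UserClassLoading {
    // Load a user class through the supplied loader instead of
    // Thread.currentThread().getContextClassLoader().
    static Class<?> loadUserClass(String className, ClassLoader userClassLoader)
            throws ClassNotFoundException {
        return Class.forName(className, true, userClassLoader);
    }

    public static void main(String[] args) throws Exception {
        // The system class loader stands in for the user code class loader here.
        ClassLoader userLoader = ClassLoader.getSystemClassLoader();
        Class<?> clazz = loadUserClass("java.util.ArrayList", userLoader);
        System.out.println(clazz.getName());
    }
}
```

The point of threading an explicit loader through is that it keeps working even when the calling thread's context class loader has been set to something that cannot see the user JARs.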

> Allow passing a ClassLoader to EnvironmentSettings
> --
>
> Key: FLINK-15635
> URL: https://issues.apache.org/jira/browse/FLINK-15635
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Timo Walther
>Assignee: Francesco Guardiani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> We had a couple of class loading issues in the past because people forgot to 
> use the right classloader in {{flink-table}}. The SQL Client executor code 
> hacks a classloader into the planner process by using {{wrapClassLoader}}, 
> which sets the thread's context classloader.
> Instead, we should allow passing a class loader to environment settings. This 
> class loader can be passed to the planner and stored in the table 
> environment, table config, etc. to ensure consistent class loading behavior.
> Having this in place should remove the need for 
> {{Thread.currentThread().getContextClassLoader()}} in the entire 
> {{flink-table}} module.





[jira] [Closed] (FLINK-24907) Enable Side Output for late events on IntervalJoins

2022-09-18 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-24907.
--
Resolution: Fixed

merged to master: b515da4409c925f07c7e116c26a2680ab7de8a8d

> Enable Side Output for late events on IntervalJoins
> ---
>
> Key: FLINK-24907
> URL: https://issues.apache.org/jira/browse/FLINK-24907
> Project: Flink
>  Issue Type: Improvement
>Reporter: Andrew Straussman
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
>
> Currently it appears that interval joins silently drop late events: 
> [https://github.com/apache/flink/blob/83a2541475228a4ff9e9a9def4049fb742353549/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/co/IntervalJoinOperator.java#L231]
> It would be great if those events would instead get sent to a side output, 
> like you can enable for other windowed join operations.





[GitHub] [flink] gyfora merged pull request #18118: [FLINK-24907] Support side out late data for interval join

2022-09-18 Thread GitBox


gyfora merged PR #18118:
URL: https://github.com/apache/flink/pull/18118


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-29131) Kubernetes operator webhook can use hostPort

2022-09-18 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17606445#comment-17606445
 ] 

Gyula Fora commented on FLINK-29131:


Thank you for exploring the options [~dylanmei]. Do we need a separate 
deployment, or is a separate pod within the same deployment enough?

> Kubernetes operator webhook can use hostPort
> 
>
> Key: FLINK-29131
> URL: https://issues.apache.org/jira/browse/FLINK-29131
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.1.0
>Reporter: Dylan Meissner
>Assignee: Dylan Meissner
>Priority: Major
> Fix For: kubernetes-operator-1.2.0
>
>
> When running the Flink operator on an EKS cluster with Calico networking, the 
> control plane (managed by AWS) cannot reach the webhook. Requests to create 
> Flink resources fail with {_}Address is not allowed{_}.
> When the webhook listens on a hostPort, the requests to create Flink resources 
> succeed. However, a pod security policy is generally required to allow the 
> webhook to listen on such ports.
> To support this scenario with the Helm chart make changes so that we can
>  * Specify a hostPort value for the webhook
>  * Name the port that the webhook listens on
>  * Use the named port in the webhook service
>  * Add a "use" pod security policy verb to cluster role
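The Helm changes listed above map onto standard Kubernetes fields. A hedged sketch of what the rendered manifest could look like; the port number 9443 and all resource and port names below are illustrative assumptions, not the actual chart values:

```yaml
# Illustrative webhook container ports (names and numbers are assumptions)
ports:
  - name: webhook-server      # named port, referenced by the Service below
    containerPort: 9443
    hostPort: 9443            # only set when the user opts in via a Helm value
---
apiVersion: v1
kind: Service
metadata:
  name: flink-operator-webhook-service
spec:
  ports:
    - port: 443
      targetPort: webhook-server   # a named port instead of a literal number
---
# ClusterRole rule granting the "use" verb on the pod security policy
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    verbs: ["use"]
```

Kubernetes allows a Service's `targetPort` to reference a container port by name, which is what makes the "named port" bullet work regardless of which number the hostPort ends up on.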





[jira] [Reopened] (FLINK-15635) Allow passing a ClassLoader to EnvironmentSettings

2022-09-18 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reopened FLINK-15635:
-

> Allow passing a ClassLoader to EnvironmentSettings
> --
>
> Key: FLINK-15635
> URL: https://issues.apache.org/jira/browse/FLINK-15635
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Timo Walther
>Assignee: Francesco Guardiani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> We had a couple of class loading issues in the past because people forgot to 
> use the right classloader in {{flink-table}}. The SQL Client executor code 
> hacks a classloader into the planner process by using {{wrapClassLoader}}, 
> which sets the thread's context classloader.
> Instead, we should allow passing a class loader to environment settings. This 
> class loader can be passed to the planner and stored in the table 
> environment, table config, etc. to ensure consistent class loading behavior.
> Having this in place should remove the need for 
> {{Thread.currentThread().getContextClassLoader()}} in the entire 
> {{flink-table}} module.





[GitHub] [flink] lsyldliu commented on pull request #20630: [FLINK-29023][docs][table] Updating Jar statement document

2022-09-18 Thread GitBox


lsyldliu commented on PR #20630:
URL: https://github.com/apache/flink/pull/20630#issuecomment-1250612424

   @flinkbot run azure





[GitHub] [flink-kubernetes-operator] gaborgsomogyi commented on a diff in pull request #369: [FLINK-29256] Add docs for k8s operator webhook

2022-09-18 Thread GitBox


gaborgsomogyi commented on code in PR #369:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/369#discussion_r973867967


##
docs/content/docs/concepts/architecture.md:
##
@@ -63,3 +63,18 @@ The Operator manages the lifecycle of Flink resources. The 
following chart illus
   - ROLLING_BACK : The resource is being rolled back to the last stable spec
   - ROLLED_BACK : The resource is deployed with the last stable spec
   - FAILED : The job terminally failed
+
+## Admission Control
+
+In addition to compiled-in admission plugins, a custom admission plugin named 
Flink Kubernetes Operator Webhook (Webhook)
+can be started as an extension and run as a webhook.
+
+The Webhook follows the Kubernetes principles, notably [dynamic admission 
control](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/).
+
+It is deployed by default when the Operator is installed on a Kubernetes 
+cluster using [Helm](https://helm.sh).

Review Comment:
   Fixed.






[jira] [Commented] (FLINK-29329) Checkpoint can not be triggered if encountering OOM

2022-09-18 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17606413#comment-17606413
 ] 

Yun Tang commented on FLINK-29329:
--

Do you mean that once an OOM occurs, no further checkpoints are triggered by the 
JM, or that each subsequently triggered checkpoint leads to an OOM?

> Checkpoint can not be triggered if encountering OOM
> ---
>
> Key: FLINK-29329
> URL: https://issues.apache.org/jira/browse/FLINK-29329
> Project: Flink
>  Issue Type: Bug
>Reporter: Yuxin Tan
>Priority: Major
> Fix For: 1.13.7
>
> Attachments: job-exceptions-1.txt
>
>
> When writing a checkpoint, an OOM error is thrown. But the JM did not fail 
> and was restored, as indicated by the log line "No master state to restore".
> After that, the JM never triggers checkpoints again. The root cause is not yet 
> clear; maybe this is a bug, and we should handle OOM or other 
> exceptions when making checkpoints.





[jira] [Commented] (FLINK-29328) [Problem when using state TTL settings in Flink]

2022-09-18 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17606408#comment-17606408
 ] 

Yun Tang commented on FLINK-29328:
--

[~Jason_H] Please use English to update this ticket.

> [Problem when using state TTL settings in Flink]
> -
>
> Key: FLINK-29328
> URL: https://issues.apache.org/jira/browse/FLINK-29328
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.14.3
> Environment: !报错1.jpg!!报错2.jpg!
>Reporter: Jason
>Priority: Minor
> Attachments: 报错1.jpg, 报错2.jpg
>
>
> I ran into the following problem on Flink 1.14.3. When I first completed a 
> Flink job, I added a TTL configuration. After starting the job, the error 
> shown in the attached images was reported during an automatic recovery from a 
> job failure. The eventual fix was to change how the state descriptor is 
> created, as shown below:
> Version before the error:
> {code:java}
> public static final MapStateDescriptor<String, Integer> 
> quantityJudgeStateDescriptor = new MapStateDescriptor<>(
> "quantityJudgeMapState",
> String.class,
> Integer.class); {code}
> Version after the fix (the TypeHint type parameters were stripped in transit 
> and are assumed to be String and Integer):
> {code:java}
> public static final MapStateDescriptor<String, Integer> 
> rateAlgorithmStateProperties = new MapStateDescriptor<>(
> "rateAlgorithmMapState",
> TypeInformation.of(new TypeHint<String>() {
> }),
> TypeInformation.of(new TypeHint<Integer>() {
> })
> ); {code}
> After switching to this version, the problem no longer appeared in testing. I 
> do not know whether this is a bug; I am filing this issue to get to the root 
> cause.
>  





[jira] [Updated] (FLINK-29280) Join hint are not propagated in subquery

2022-09-18 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-29280:
---
Fix Version/s: 1.16.0
   1.17.0

> Join hint are not propagated in subquery
> 
>
> Key: FLINK-29280
> URL: https://issues.apache.org/jira/browse/FLINK-29280
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0
>
>
> Add the following code to JoinHintTestBase to reproduce this bug.
> {code:java}
> @Test
> public void testJoinHintWithJoinHintInSubQuery() {
> String sql =
> "select * from T1 WHERE a1 IN (select /*+ %s(T2) */ a2 from T2 
> join T3 on T2.a2 = T3.a3)";
> verifyRelPlanByCustom(String.format(sql, 
> buildCaseSensitiveStr(getTestSingleJoinHint())));
> } {code}
> This is because Calcite does not propagate hints into subqueries, and Flink 
> also does not resolve them in FlinkSubQueryRemoveRule.





[jira] [Commented] (FLINK-29328) 【flink在使用状态过期设置时出现问题】

2022-09-18 Thread Yanfei Lei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17606407#comment-17606407
 ] 

Yanfei Lei commented on FLINK-29328:


From your attachments, this issue is caused by state incompatibility. Did you 
add some new states when you configured TTL?

> [Problem when using state TTL settings in Flink]
> -
>
> Key: FLINK-29328
> URL: https://issues.apache.org/jira/browse/FLINK-29328
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.14.3
> Environment: !报错1.jpg!!报错2.jpg!
>Reporter: Jason
>Priority: Minor
> Attachments: 报错1.jpg, 报错2.jpg
>
>
> I ran into the following problem on Flink 1.14.3. When I first completed a 
> Flink job, I added a TTL configuration. After starting the job, the error 
> shown in the attached images was reported during an automatic recovery from a 
> job failure. The eventual fix was to change how the state descriptor is 
> created, as shown below:
> Version before the error:
> {code:java}
> public static final MapStateDescriptor<String, Integer> 
> quantityJudgeStateDescriptor = new MapStateDescriptor<>(
> "quantityJudgeMapState",
> String.class,
> Integer.class); {code}
> Version after the fix (the TypeHint type parameters were stripped in transit 
> and are assumed to be String and Integer):
> {code:java}
> public static final MapStateDescriptor<String, Integer> 
> rateAlgorithmStateProperties = new MapStateDescriptor<>(
> "rateAlgorithmMapState",
> TypeInformation.of(new TypeHint<String>() {
> }),
> TypeInformation.of(new TypeHint<Integer>() {
> })
> ); {code}
> After switching to this version, the problem no longer appeared in testing. I 
> do not know whether this is a bug; I am filing this issue to get to the root 
> cause.
>  





[jira] [Closed] (FLINK-29280) Join hint are not propagated in subquery

2022-09-18 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he closed FLINK-29280.
--
Resolution: Fixed

Fixed in master: 22cb554008320e6684280b5205f93d7a6f685c6c
in 1.16.0: b37a8153f22b62982ca144604a34056246f6f36c

> Join hint are not propagated in subquery
> 
>
> Key: FLINK-29280
> URL: https://issues.apache.org/jira/browse/FLINK-29280
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
>
> Add the following code to JoinHintTestBase to reproduce this bug.
> {code:java}
> @Test
> public void testJoinHintWithJoinHintInSubQuery() {
> String sql =
> "select * from T1 WHERE a1 IN (select /*+ %s(T2) */ a2 from T2 
> join T3 on T2.a2 = T3.a3)";
> verifyRelPlanByCustom(String.format(sql, 
> buildCaseSensitiveStr(getTestSingleJoinHint())));
> } {code}
> This is because Calcite does not propagate hints into subqueries, and Flink 
> also does not resolve them in FlinkSubQueryRemoveRule.





[jira] [Assigned] (FLINK-29280) Join hint are not propagated in subquery

2022-09-18 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he reassigned FLINK-29280:
--

Assignee: xuyang

> Join hint are not propagated in subquery
> 
>
> Key: FLINK-29280
> URL: https://issues.apache.org/jira/browse/FLINK-29280
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
>
> Add the following code to JoinHintTestBase to reproduce this bug.
> {code:java}
> @Test
> public void testJoinHintWithJoinHintInSubQuery() {
> String sql =
> "select * from T1 WHERE a1 IN (select /*+ %s(T2) */ a2 from T2 
> join T3 on T2.a2 = T3.a3)";
> verifyRelPlanByCustom(String.format(sql, 
> buildCaseSensitiveStr(getTestSingleJoinHint())));
> } {code}
> This is because Calcite does not propagate hints into subqueries, and Flink 
> also does not resolve them in FlinkSubQueryRemoveRule.





[GitHub] [flink] godfreyhe closed pull request #20823: [FLINK-29280][table-planner] fix join hints could not be propagated in subquery

2022-09-18 Thread GitBox


godfreyhe closed pull request #20823: [FLINK-29280][table-planner] fix join 
hints could not be propagated in subquery
URL: https://github.com/apache/flink/pull/20823





[jira] [Updated] (FLINK-29329) Checkpoint can not be triggered if encountering OOM

2022-09-18 Thread Yuxin Tan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuxin Tan updated FLINK-29329:
--
Attachment: job-exceptions-1.txt

> Checkpoint can not be triggered if encountering OOM
> ---
>
> Key: FLINK-29329
> URL: https://issues.apache.org/jira/browse/FLINK-29329
> Project: Flink
>  Issue Type: Bug
>Reporter: Yuxin Tan
>Priority: Major
> Fix For: 1.13.7
>
> Attachments: job-exceptions-1.txt
>
>
> When writing a checkpoint, an OOM error is thrown. But the JM did not fail 
> and was restored, as indicated by the log line "No master state to restore".
> After that, the JM never triggers checkpoints again. The root cause is not yet 
> clear; maybe this is a bug, and we should handle OOM or other 
> exceptions when making checkpoints.





[GitHub] [flink] lincoln-lil commented on pull request #20800: [FLINK-28850][table-planner] Support table alias in LOOKUP hint

2022-09-18 Thread GitBox


lincoln-lil commented on PR #20800:
URL: https://github.com/apache/flink/pull/20800#issuecomment-1250520693

   > @flinkbot run azure
   
   failed due to a known issue:  https://issues.apache.org/jira/browse/FLINK-29315
   
   
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41103&view=logs&j=ab8d7049-0920-5f52-eeed-e548522c2880&t=a7d66b08-71af-51c3-2721-4b90fa4f94e3





[jira] [Updated] (FLINK-29329) Checkpoint can not be triggered if encountering OOM

2022-09-18 Thread Yuxin Tan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuxin Tan updated FLINK-29329:
--
Attachment: (was: job-exceptions.txt)

> Checkpoint can not be triggered if encountering OOM
> ---
>
> Key: FLINK-29329
> URL: https://issues.apache.org/jira/browse/FLINK-29329
> Project: Flink
>  Issue Type: Bug
>Reporter: Yuxin Tan
>Priority: Major
> Fix For: 1.13.7
>
>
> When writing a checkpoint, an OOM error is thrown. But the JM did not fail 
> and was restored, as indicated by the log line "No master state to restore".
> After that, the JM never triggers checkpoints again. The root cause is not yet 
> clear; maybe this is a bug, and we should handle OOM or other 
> exceptions when making checkpoints.





[jira] [Updated] (FLINK-29329) Checkpoint can not be triggered if encountering OOM

2022-09-18 Thread Yuxin Tan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuxin Tan updated FLINK-29329:
--
Description: 
When writing a checkpoint, an OOM error is thrown. But the JM did not fail and 
was restored, as indicated by the log line "No master state to restore".

After that, the JM never triggers checkpoints again. The root cause is not yet 
clear; maybe this is a bug, and we should handle OOM or other exceptions when 
making checkpoints.

  was:
When writing a checkpoint, an OOM error is thrown. But the JM is not failed and 
is restored because I found a log "No master state to restore".

Then JM never makes checkpoints anymore. Currently, the root cause is not that 
clear, maybe this is a bug and we should deal with the OOM or other exceptions 
when making checkpoints.

 

[^job-exceptions.txt]


> Checkpoint can not be triggered if encountering OOM
> ---
>
> Key: FLINK-29329
> URL: https://issues.apache.org/jira/browse/FLINK-29329
> Project: Flink
>  Issue Type: Bug
>Reporter: Yuxin Tan
>Priority: Major
> Fix For: 1.13.7
>
>
> When writing a checkpoint, an OOM error is thrown. But the JM did not fail 
> and was restored, as indicated by the log line "No master state to restore".
> After that, the JM never triggers checkpoints again. The root cause is not yet 
> clear; maybe this is a bug, and we should handle OOM or other 
> exceptions when making checkpoints.





[jira] [Updated] (FLINK-27603) Hive dialect supports "reload function"

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-27603:
-
Fix Version/s: (was: 1.16.0)
   1.17.0

> Hive dialect supports "reload function"
> ---
>
> Key: FLINK-27603
> URL: https://issues.apache.org/jira/browse/FLINK-27603
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: luoyuxia
>Assignee: luoyuxia
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.17.0
>
>
> In Hive, "reload function" lets the user use functions defined in other 
> sessions.
> In Flink, however, functions are always fetched from the catalog, and if it is 
> a Hive catalog, all the user-defined functions are loaded from the metastore.
> So there is no need to "reload function" explicitly, as it is effectively done 
> implicitly.
> But when using the Hive dialect in Flink, an unsupported-operation exception 
> is thrown for the "reload function" statement.
> I am wondering whether to keep the current behavior of throwing an exception 
> or to treat it as an 'NopOperation'.
>  





[jira] [Closed] (FLINK-28131) FLIP-168: Speculative Execution for Batch Job

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang closed FLINK-28131.

Resolution: Done

> FLIP-168: Speculative Execution for Batch Job
> -
>
> Key: FLINK-28131
> URL: https://issues.apache.org/jira/browse/FLINK-28131
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Coordination
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Major
> Fix For: 1.16.0
>
>
> Speculative execution helps mitigate slow tasks caused by 
> problematic nodes. The basic idea is to start mirror tasks on other nodes 
> when a slow task is detected. A mirror task processes the same input data 
> and produces the same data as the original task. 
> More details can be found in 
> [FLIP-168|https://cwiki.apache.org/confluence/display/FLINK/FLIP-168%3A+Speculative+Execution+for+Batch+Job].
>  
> This is the umbrella ticket to track all the changes of this feature.





[jira] [Updated] (FLINK-13910) Many serializable classes have no explicit 'serialVersionUID'

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-13910:
-
Fix Version/s: 1.17.0
   (was: 1.16.0)

> Many serializable classes have no explicit 'serialVersionUID'
> -
>
> Key: FLINK-13910
> URL: https://issues.apache.org/jira/browse/FLINK-13910
> Project: Flink
>  Issue Type: Bug
>  Components: API / Type Serialization System
>Reporter: Yun Tang
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Fix For: 1.17.0
>
> Attachments: SerializableNoSerialVersionUIDField, 
> classes-without-uid-per-module, serializable-classes-without-uid-5249249
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, many serializable classes in Flink have no explicit 
> 'serialVersionUID'. As the [official 
> doc|https://flink.apache.org/contributing/code-style-and-quality-java.html#java-serialization]
>  says, {{Serializable classes must define a Serial Version UID}}. 
> A missing 'serialVersionUID' can cause compatibility problems. Take 
> {{TwoPhaseCommitSinkFunction}} for example: since no explicit 
> 'serialVersionUID' is defined, after 
> [FLINK-10455|https://github.com/apache/flink/commit/489be82a6d93057ed4a3f9bf38ef50d01d11d96b]
>  was introduced, its default 'serialVersionUID' changed from 
> "4584405056408828651" to "4064406918549730832". In other words, if we submit 
> a job from a local Flink 1.6.3 installation to a remote Flink 1.6.2 cluster 
> using {{TwoPhaseCommitSinkFunction}}, we get an exception like:
> {code:java}
> org.apache.flink.streaming.runtime.tasks.StreamTaskException: Cannot 
> instantiate user function.
> at 
> org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperator(StreamConfig.java:239)
> at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.(OperatorChain.java:104)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:267)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.InvalidClassException: 
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction; 
> local class incompatible: stream classdesc serialVersionUID = 
> 4584405056408828651, local class serialVersionUID = 4064406918549730832
> at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:699)
> at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1885)
> at 
> java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
> at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1885)
> at 
> java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2042)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
> at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
> at 
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
> at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:537)
> at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:524)
> at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:512)
> at 
> org.apache.flink.util.InstantiationUtil.readObjectFromConfig(InstantiationUtil.java:473)
> at 
> org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperator(StreamConfig.java:224)
> ... 4 more
> {code}
> A similar problem exists in 
> {{org.apache.flink.streaming.api.operators.SimpleOperatorFactory}}, which has 
> different 'serialVersionUID' values between release-1.9 and the current 
> master branch.
> IMO, we have two options to fix this bug:
> # Add an explicit serialVersionUID to those classes, identical to the 
> latest Flink 1.9.0 release code.
> # Use a mechanism similar to {{FailureTolerantObjectInputStream}} in 
> {{InstantiationUtil}} to ignore serialVersionUID mismatches.
> I have collected all production classes without a serialVersionUID from the 
> latest master branch in the attachment, which amounts to 639 classes.
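The failure mode described above can be demonstrated with plain JDK serialization: when no `serialVersionUID` is declared, the JVM derives one from the class's structure, so any structural change silently changes the computed value. A small self-contained sketch (the class names are illustrative, not Flink's):

```java
import java.io.ObjectStreamClass;
import java.io.Serializable;

public class SerialUidDemo {
    // No explicit serialVersionUID: the JVM computes one from the class layout,
    // so adding a field or method changes it and breaks deserialization of old data.
    static class WithoutUid implements Serializable {
        int count;
    }

    // Explicit serialVersionUID: stable across compatible class changes.
    static class WithUid implements Serializable {
        private static final long serialVersionUID = 1L;
        int count;
    }

    public static void main(String[] args) {
        long computed = ObjectStreamClass.lookup(WithoutUid.class).getSerialVersionUID();
        long declared = ObjectStreamClass.lookup(WithUid.class).getSerialVersionUID();
        System.out.println("computed=" + computed); // JVM-derived, fragile
        System.out.println("declared=" + declared); // always 1
    }
}
```

This is exactly why the {{TwoPhaseCommitSinkFunction}} example above ends in an {{InvalidClassException}}: the sender and receiver computed different UIDs for structurally different versions of the same class.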





[jira] [Created] (FLINK-29329) Checkpoint can not be triggered if encountering OOM

2022-09-18 Thread Yuxin Tan (Jira)
Yuxin Tan created FLINK-29329:
-

 Summary: Checkpoint can not be triggered if encountering OOM
 Key: FLINK-29329
 URL: https://issues.apache.org/jira/browse/FLINK-29329
 Project: Flink
  Issue Type: Bug
Reporter: Yuxin Tan
 Fix For: 1.13.7
 Attachments: job-exceptions.txt

When writing a checkpoint, an OOM error is thrown. But the JM did not fail and 
was restored, as indicated by the log line "No master state to restore".

After that, the JM never triggers checkpoints again. The root cause is not yet 
clear; maybe this is a bug, and we should handle OOM or other exceptions when 
making checkpoints.

 

[^job-exceptions.txt]





[jira] [Updated] (FLINK-21375) Refactor HybridMemorySegment

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-21375:
-
Fix Version/s: 1.17.0
   (was: 1.16.0)

> Refactor HybridMemorySegment
> 
>
> Key: FLINK-21375
> URL: https://issues.apache.org/jira/browse/FLINK-21375
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Coordination
>Reporter: Xintong Song
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.17.0
>
>
> Per the discussion in [this PR|https://github.com/apache/flink/pull/14904], 
> we plan to refactor {{HybridMemorySegment}} as follows.
> * Separate into memory type specific implementations: heap / direct / native 
> (unsafe)
> * Remove {{wrap()}}, replacing with {{processAsByteBuffer()}}
> * Remove native memory cleaner logic





[jira] [Updated] (FLINK-18986) KubernetesSessionCli creates RestClusterClient for detached deployments

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-18986:
-
Fix Version/s: (was: 1.16.0)
   1.17.0

> KubernetesSessionCli creates RestClusterClient for detached deployments
> ---
>
> Key: FLINK-18986
> URL: https://issues.apache.org/jira/browse/FLINK-18986
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.11.0
>Reporter: Chesnay Schepler
>Priority: Minor
>  Labels: auto-unassigned, pull-request-available, usability
> Fix For: 1.17.0
>
>
> The {{KubernetesSessionCli}} creates a {{ClusterClient}} for retrieving the 
> {{clusterId}} if the cluster was just started.
> However, this {{clusterId}} is only used in attached executions.
> For detached deployments where {{kubernetes.rest-service.exposed.type}} is 
> set to {{ClusterIP}} this results in unnecessary error messages about the 
> {{RestClusterClient}} not being able to be created.
> Given that there is no need to create the client in this situation, we should 
> skip this step.





[jira] [Updated] (FLINK-25444) ClosureCleaner rejects ExecutionConfig as not be Serializable due to TextElement

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-25444:
-
Fix Version/s: 1.17.0
   (was: 1.16.0)

> ClosureCleaner rejects ExecutionConfig as not be Serializable due to 
> TextElement
> 
>
> Key: FLINK-25444
> URL: https://issues.apache.org/jira/browse/FLINK-25444
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.14.0
>Reporter: Wen Zhou
>Assignee: Chesnay Schepler
>Priority: Minor
> Fix For: 1.17.0
>
>
> This appears to be a bug introduced by the latest Flink commit to the file 
> [flink-core/src/main/java/org/apache/flink/api/common/ExecutionConfig.java|https://github.com/apache/flink/commit/9e0e0929b86c372c9243daad9d654af9e7718708#diff-7a439abdf207cf6da8aa6c147b38c1346820fe786afbf652bc614fc377cdf238]
> Diffing the file, we can see that TextElement is used where ClosureCleanerLevel 
> is used as a member of the Serializable ExecutionConfig.
> [TextElement in ClosureCleanerLevel|https://i.stack.imgur.com/ky3d8.png]
> The simplest way to verify the problem is to run the following code in Flink 
> 1.13.5 and 1.14.x; the exception is reproduced in 1.14.x, and the only diff 
> between 1.13.5 and 1.14.x is the latest commit.
>  
> @Test
> public void testExecutionConfigSerializable() throws Exception {
>     ExecutionConfig config = new ExecutionConfig();
>     ClosureCleaner.clean(
>             config, ExecutionConfig.ClosureCleanerLevel.RECURSIVE, true);
> }
>  
> The problem can be found here 
> [https://stackoverflow.com/questions/70443743/flink-blockelement-exception-when-updating-to-version-1-14-2/70468925?noredirect=1#comment124597510_70468925]





[jira] [Closed] (FLINK-21556) StreamingKafkaITCase hangs on azure

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang closed FLINK-21556.

Resolution: Fixed

> StreamingKafkaITCase hangs on azure
> ---
>
> Key: FLINK-21556
> URL: https://issues.apache.org/jira/browse/FLINK-21556
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.13.0
>Reporter: Dawid Wysakowicz
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
> Fix For: 1.16.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13966&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529





[jira] [Updated] (FLINK-27391) Support Hive bucket table

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-27391:
-
Fix Version/s: 1.17.0
   (was: 1.16.0)

> Support Hive bucket table
> -
>
> Key: FLINK-27391
> URL: https://issues.apache.org/jira/browse/FLINK-27391
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.13.1, 1.15.0
>Reporter: tartarus
>Assignee: luoyuxia
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.17.0
>
>
> Support Hive bucket table





[jira] [Updated] (FLINK-27242) Support RENAME PARTITION statement for partitioned table

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-27242:
-
Fix Version/s: 1.17.0
   (was: 1.16.0)

> Support RENAME PARTITION statement for partitioned table
> 
>
> Key: FLINK-27242
> URL: https://issues.apache.org/jira/browse/FLINK-27242
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: dalongliu
>Assignee: dalongliu
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.17.0
>
>






[jira] [Updated] (FLINK-27243) Support SHOW PARTITIONS statement for partitioned table

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-27243:
-
Fix Version/s: 1.17.0
   (was: 1.16.0)

> Support SHOW PARTITIONS statement for partitioned table
> ---
>
> Key: FLINK-27243
> URL: https://issues.apache.org/jira/browse/FLINK-27243
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: dalongliu
>Assignee: dalongliu
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.17.0
>
>






[jira] [Updated] (FLINK-27241) Support DROP PARTITION statement for partitioned table

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-27241:
-
Fix Version/s: 1.17.0
   (was: 1.16.0)

> Support DROP PARTITION statement for partitioned table
> --
>
> Key: FLINK-27241
> URL: https://issues.apache.org/jira/browse/FLINK-27241
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: dalongliu
>Assignee: dalongliu
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.17.0
>
>






[jira] [Updated] (FLINK-27240) Support ADD PARTITION statement for partitioned table

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-27240:
-
Fix Version/s: 1.17.0
   (was: 1.16.0)

> Support ADD PARTITION statement for partitioned table
> -
>
> Key: FLINK-27240
> URL: https://issues.apache.org/jira/browse/FLINK-27240
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: dalongliu
>Assignee: dalongliu
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.17.0
>
>






[jira] [Updated] (FLINK-26968) Bump CopyOnWriteStateMap entry version before write

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-26968:
-
Fix Version/s: 1.17.0
   (was: 1.16.0)

> Bump CopyOnWriteStateMap entry version before write
> ---
>
> Key: FLINK-26968
> URL: https://issues.apache.org/jira/browse/FLINK-26968
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.17.0
>
>
> CopyOnWriteStateMap copies the entry before returning it to the client
> for update. This also updates its state and entry versions.
> However, if the entry is NOT used by any snapshots, the versions
> will stay the same even though the state is going to be updated.
> With incremental checkpoints, this causes such an update to be
> ignored in the next snapshot.





[jira] [Updated] (FLINK-26967) Fix race condition in CopyOnWriteStateMap

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-26967:
-
Fix Version/s: 1.17.0
   (was: 1.16.0)

> Fix race condition in CopyOnWriteStateMap
> -
>
> Key: FLINK-26967
> URL: https://issues.apache.org/jira/browse/FLINK-26967
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.17.0
>
>
> Currently, handling chained entry on copy-on-write uses object
> identity to find the wanted entry in the chain. However, if
> the same method is running concurrently, the object in the chain
> can be replaced by its copy; the condition will never be met and
> the chain end will be reached, causing an NPE.
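The failure mode described above can be reproduced in isolation. The sketch below is not Flink's actual CopyOnWriteStateMap; it only illustrates how an identity-based chain lookup walks off the end of the chain, and throws an NPE, once a concurrent copy-on-write has replaced the target entry with a copy:

```java
// Singly linked hash-bucket entry (illustrative, not Flink's class).
class Entry {
    final String key;
    Entry next;
    Entry(String key, Entry next) { this.key = key; this.next = next; }
}

public class IdentityLookupDemo {
    // Finds the entry preceding `target` by object identity (==).
    static Entry findPrevious(Entry head, Entry target) {
        Entry e = head;
        while (e.next != target) { // NPE once e becomes the last entry and e.next is null
            e = e.next;
        }
        return e;
    }

    public static void main(String[] args) {
        Entry c = new Entry("c", null);
        Entry b = new Entry("b", c);
        Entry a = new Entry("a", b);

        // Normal case: identity match succeeds.
        if (findPrevious(a, c) != b) {
            throw new AssertionError("expected to find b");
        }

        // Simulated race: `c` is replaced by an equal-but-distinct copy, so a
        // lookup using the stale reference never matches and runs off the chain.
        b.next = new Entry("c", null);
        boolean npe = false;
        try {
            findPrevious(a, c);
        } catch (NullPointerException expected) {
            npe = true;
        }
        if (!npe) {
            throw new AssertionError("expected NPE from stale identity lookup");
        }
        System.out.println("stale identity lookup throws NPE");
    }
}
```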





[jira] [Updated] (FLINK-26965) Allow reuse of PeriodicMaterializationManager

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-26965:
-
Fix Version/s: 1.17.0
   (was: 1.16.0)

> Allow reuse of PeriodicMaterializationManager
> -
>
> Key: FLINK-26965
> URL: https://issues.apache.org/jira/browse/FLINK-26965
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.17.0
>
>
> The same approach that is used with Changelog can be
> used with other state backends too,
> in particular IncrementalHeapStateBackend (FLIP-151).





[jira] [Updated] (FLINK-26964) Notify CheckpointStrategy about checkpoint completion/abortion

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-26964:
-
Fix Version/s: 1.17.0
   (was: 1.16.0)

> Notify CheckpointStrategy about checkpoint completion/abortion
> --
>
> Key: FLINK-26964
> URL: https://issues.apache.org/jira/browse/FLINK-26964
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.17.0
>
>
> Notifications could be used by incremental snapshot strategy
> to replace state handles with placeholders.





[jira] [Updated] (FLINK-26963) Allow HeapBackend creation customization

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-26963:
-
Fix Version/s: 1.17.0
   (was: 1.16.0)

> Allow HeapBackend creation customization
> 
>
> Key: FLINK-26963
> URL: https://issues.apache.org/jira/browse/FLINK-26963
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.17.0
>
>
> Incremental Heap State Backend needs to customize
> SnapshotStrategy and RestoreOperation and their components.





[jira] [Updated] (FLINK-22956) Stream Over TTL should use enableTimeToLive of state instead of timer

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-22956:
-
Fix Version/s: 1.17.0
   (was: 1.16.0)

> Stream Over TTL should use enableTimeToLive of state instead of timer
> -
>
> Key: FLINK-22956
> URL: https://issues.apache.org/jira/browse/FLINK-22956
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Runtime
>Reporter: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>






[jira] [Updated] (FLINK-23815) Separate the concerns of PendingCheckpoint and CheckpointPlan

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-23815:
-
Fix Version/s: 1.17.0
   (was: 1.16.0)

> Separate the concerns of PendingCheckpoint and CheckpointPlan
> -
>
> Key: FLINK-23815
> URL: https://issues.apache.org/jira/browse/FLINK-23815
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Reporter: Yun Gao
>Assignee: Yun Gao
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.17.0
>
>
> As discussed in 
> https://github.com/apache/flink/pull/16655#issuecomment-899603149:
> {quote}
> When dealing with a PendingCheckpoint, should one be forced to also deal with 
> a CheckpointPlan at all? I think not, the PendingCheckpoint is purely a 
> tracker of gathered states and how many tasks have acknowledged and how many 
> we are still waiting for. That makes testing, reusability, etc. simple.
> That means we should strive to have the PendingCheckpoint independent of 
> CheckpointPlan. I think we can achieve this with the following steps:
> Much of the reason to have the CheckpointPlan in PendingCheckpoint is 
> because the CheckpointCoordinator often requires access to the checkpoint 
> plan for a pending checkpoint. And it just uses the PendingCheckpoint as a 
> convenient way to store the CheckpointPlan.
> A cleaner solution would be changing, in CheckpointCoordinator, the 
> Map<Long, PendingCheckpoint> pendingCheckpoints to a Map<Long, 
> Tuple2<PendingCheckpoint, CheckpointPlan>> pendingCheckpoints. The 
> CheckpointCoordinator would keep the two things it frequently needs together 
> in a tuple, rather than storing one in the other and thus expanding the 
> scope/concerns of PendingCheckpoint.
> The above change should allow us to reduce the interface of 
> PendingCheckpoint.checkpointPlan to 
> PendingCheckpointFinishedTaskStateProvider.
> The interface PendingCheckpointFinishedTaskStateProvider has some methods 
> that are about tracking state, not just about giving access to finished 
> state: void reportTaskFinishedOnRestore(ExecutionVertex task) and void 
> reportTaskHasFinishedOperators(ExecutionVertex task);.
> I am wondering if those two would not be more cleanly handled outside the 
> PendingCheckpoint as well. Meaning in the method 
> CheckpointCoordinator.receiveAcknowledgeMessage() we handle the reporting of 
> finished operators to the CheckpointPlan. The method 
> PendingCheckpoint.acknowledgeTask() only deals with the states.
> {quote}
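The map change proposed in the quote could look roughly like this; `Pair` stands in for Flink's `Tuple2`, and all names here are illustrative, not the actual coordinator code:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-ins for the two objects the coordinator tracks.
class PendingCheckpoint {
    final long id;
    PendingCheckpoint(long id) { this.id = id; }
}

class CheckpointPlan {
    final int tasksToTrigger;
    CheckpointPlan(int tasksToTrigger) { this.tasksToTrigger = tasksToTrigger; }
}

// Stand-in for org.apache.flink.api.java.tuple.Tuple2.
class Pair<A, B> {
    final A f0;
    final B f1;
    Pair(A f0, B f1) { this.f0 = f0; this.f1 = f1; }
}

public class CoordinatorMapDemo {
    // Before: Map<Long, PendingCheckpoint>, with the plan stored inside the
    // checkpoint. After: the plan kept alongside it in the coordinator's map,
    // so PendingCheckpoint no longer needs to know about CheckpointPlan.
    static final Map<Long, Pair<PendingCheckpoint, CheckpointPlan>>
            pendingCheckpoints = new HashMap<>();

    public static void main(String[] args) {
        pendingCheckpoints.put(
                7L, new Pair<>(new PendingCheckpoint(7L), new CheckpointPlan(3)));
        Pair<PendingCheckpoint, CheckpointPlan> entry = pendingCheckpoints.get(7L);
        if (entry.f0.id != 7L || entry.f1.tasksToTrigger != 3) {
            throw new AssertionError("lookup failed");
        }
        System.out.println("plan kept alongside checkpoint, not inside it");
    }
}
```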





[jira] [Updated] (FLINK-16388) Bump orc-core to 1.4.5

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-16388:
-
Fix Version/s: 1.17.0
   (was: 1.16.0)

> Bump orc-core to 1.4.5
> --
>
> Key: FLINK-16388
> URL: https://issues.apache.org/jira/browse/FLINK-16388
> Project: Flink
>  Issue Type: Sub-task
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Chesnay Schepler
>Priority: Major
> Fix For: 1.17.0
>
>






[jira] [Updated] (FLINK-18229) Pending worker requests should be properly cleared

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-18229:
-
Fix Version/s: (was: 1.16.0)
   1.17.0

> Pending worker requests should be properly cleared
> --
>
> Key: FLINK-18229
> URL: https://issues.apache.org/jira/browse/FLINK-18229
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes, Deployment / YARN, Runtime / 
> Coordination
>Affects Versions: 1.9.3, 1.10.1, 1.11.0
>Reporter: Xintong Song
>Assignee: Weihua Hu
>Priority: Major
> Fix For: 1.17.0
>
>
> Currently, if Kubernetes/Yarn does not have enough resources to fulfill 
> Flink's resource requirement, there will be pending pod/container requests on 
> Kubernetes/Yarn. These pending resource requirements are never cleared until 
> either fulfilled or the Flink cluster is shutdown.
> However, sometimes Flink no longer needs the pending resources. E.g., the 
> slot request is later fulfilled by other slots that become available, or the 
> job failed due to a slot request timeout (in a session cluster). In such cases, 
> Flink does not remove the resource request until the resource is allocated, 
> then discovers that it no longer needs the allocated resource and releases 
> it. This would affect the underlying Kubernetes/Yarn cluster, especially 
> when the cluster is under heavy workload.
> It would be good for Flink to cancel pod/container requests as early as 
> possible once it discovers that some of the pending workers are no longer 
> needed.
> There are several approaches that could potentially achieve this.
>  # We can always check whether there's a pending worker that can be canceled 
> when a {{PendingTaskManagerSlot}} is unassigned.
>  # We can have a separate timeout for requesting new worker. If the resource 
> cannot be allocated within the given time since requested, we should cancel 
> that resource request and claim a resource allocation failure.
>  # We can share the same timeout for starting new worker (proposed in 
> FLINK-13554). This is similar to 2), but it requires the worker to be 
> registered, rather than allocated, before timeout.
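Approach 2 — a dedicated timeout for unfulfilled worker requests — could be sketched as follows. The names and the cancellation hook are assumptions for illustration, not Flink's actual resource manager API:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Track when each worker request was issued; cancel any request that is
// still unfulfilled once it has been pending longer than the timeout.
public class PendingRequestTimeoutDemo {
    static final Duration REQUEST_TIMEOUT = Duration.ofMinutes(5);
    static final Map<String, Instant> pendingRequests = new HashMap<>();

    static int cancelExpired(Instant now) {
        int cancelled = 0;
        Iterator<Map.Entry<String, Instant>> it =
                pendingRequests.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Instant> e = it.next();
            if (Duration.between(e.getValue(), now).compareTo(REQUEST_TIMEOUT) > 0) {
                it.remove(); // here: release the pod/container request and report failure
                cancelled++;
            }
        }
        return cancelled;
    }

    public static void main(String[] args) {
        Instant t0 = Instant.parse("2022-09-18T00:00:00Z");
        pendingRequests.put("worker-1", t0);
        pendingRequests.put("worker-2", t0.plus(Duration.ofMinutes(4)));

        // Six minutes later, only worker-1 has exceeded the 5-minute timeout.
        int cancelled = cancelExpired(t0.plus(Duration.ofMinutes(6)));
        if (cancelled != 1 || !pendingRequests.containsKey("worker-2")) {
            throw new AssertionError("expected only worker-1 to be cancelled");
        }
        System.out.println("cancelled expired requests: " + cancelled);
    }
}
```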





[jira] [Updated] (FLINK-21784) Allow HeapBackend RestoreOperation customization

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-21784:
-
Fix Version/s: 1.17.0
   (was: 1.16.0)

> Allow HeapBackend RestoreOperation customization
> 
>
> Key: FLINK-21784
> URL: https://issues.apache.org/jira/browse/FLINK-21784
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-assigned
> Fix For: 1.17.0
>
>
> Allow customization of HeapRestoreOperation
> to be able to restore from incremental checkpoints.





[jira] [Updated] (FLINK-21783) Allow HeapBackend snapshotting customization

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-21783:
-
Fix Version/s: 1.17.0
   (was: 1.16.0)

> Allow HeapBackend snapshotting customization
> 
>
> Key: FLINK-21783
> URL: https://issues.apache.org/jira/browse/FLINK-21783
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-assigned
> Fix For: 1.17.0
>
>
> Allow customization of snapshotting of HeapKeyedStateBackend
> so that it can be snapshotted incrementally.





[jira] [Updated] (FLINK-21649) Allow extension of CopyOnWrite state classes

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-21649:
-
Fix Version/s: 1.17.0
   (was: 1.16.0)

> Allow extension of CopyOnWrite state classes
> 
>
> Key: FLINK-21649
> URL: https://issues.apache.org/jira/browse/FLINK-21649
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-assigned
> Fix For: 1.17.0
>
>
> Motivation: allow extension by incremental counterparts





[jira] [Updated] (FLINK-16952) Parquet file system format support filter pushdown

2022-09-18 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-16952:
-
Fix Version/s: 1.17.0
   (was: 1.16.0)

> Parquet file system format support filter pushdown
> ---
>
> Key: FLINK-16952
> URL: https://issues.apache.org/jira/browse/FLINK-16952
> Project: Flink
>  Issue Type: Sub-task
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Jingsong Lee
>Assignee: luoyuxia
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-assigned
> Fix For: 1.17.0
>
>
> We can create the conversion between Flink Expression(NOTE: should be new 
> Expression instead of PlannerExpression) and parquet FilterPredicate.
> And apply to Parquet file system format.
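The shape of such a conversion can be shown with a toy sketch: walk a (simplified) expression tree and produce a predicate, in the spirit of translating a Flink Expression into Parquet's FilterPredicate. Every type here is an illustrative stand-in, not the real Flink or Parquet API:

```java
import java.util.Map;
import java.util.function.Predicate;

// Simplified expression tree (stand-in for Flink's resolved expressions).
abstract class Expr {}

class Eq extends Expr {
    final String field;
    final int value;
    Eq(String field, int value) { this.field = field; this.value = value; }
}

class And extends Expr {
    final Expr left, right;
    And(Expr left, Expr right) { this.left = left; this.right = right; }
}

public class PushdownDemo {
    // Recursively convert the expression tree into a row predicate
    // (stand-in for building a Parquet FilterPredicate).
    static Predicate<Map<String, Integer>> convert(Expr e) {
        if (e instanceof Eq) {
            Eq eq = (Eq) e;
            return row -> Integer.valueOf(eq.value).equals(row.get(eq.field));
        }
        And and = (And) e;
        return convert(and.left).and(convert(and.right));
    }

    public static void main(String[] args) {
        Expr filter = new And(new Eq("year", 2022), new Eq("month", 9));
        Predicate<Map<String, Integer>> p = convert(filter);
        if (!p.test(Map.of("year", 2022, "month", 9))) {
            throw new AssertionError("matching row should pass");
        }
        if (p.test(Map.of("year", 2021, "month", 9))) {
            throw new AssertionError("non-matching row should be filtered");
        }
        System.out.println("pushdown predicate ok");
    }
}
```

The real conversion would additionally have to bail out (keep the filter in Flink) for expressions Parquet cannot evaluate, rather than fail.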





[jira] [Closed] (FLINK-28738) [doc] Add a user doc about the correctness for non-deterministic updates

2022-09-18 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he closed FLINK-28738.
--
Resolution: Fixed

Fixed in master: a02b2c232ea2fb1b22bd0e5c290e4c6f0217549b
in 1.16.0: be73c9695d01e3d3164b6c89342a1e41fa4ea450

> [doc] Add a user doc about the correctness for non-deterministic updates
> 
>
> Key: FLINK-28738
> URL: https://issues.apache.org/jira/browse/FLINK-28738
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Table SQL / Planner
>Reporter: lincoln lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>






[GitHub] [flink] godfreyhe closed pull request #20679: [FLINK-28738][table-planner] Adds a user doc about the determinism in streaming

2022-09-18 Thread GitBox


godfreyhe closed pull request #20679: [FLINK-28738][table-planner] Adds a user 
doc about the determinism in streaming
URL: https://github.com/apache/flink/pull/20679


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-29328) Problem when using state TTL settings in Flink

2022-09-18 Thread Jason (Jira)
Jason created FLINK-29328:
-

 Summary: Problem when using state TTL settings in Flink
 Key: FLINK-29328
 URL: https://issues.apache.org/jira/browse/FLINK-29328
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream
Affects Versions: 1.14.3
 Environment: !报错1.jpg!!报错2.jpg!
Reporter: Jason
 Attachments: 报错1.jpg, 报错2.jpg

I hit the following problem on Flink 1.14.3. When I first completed a Flink job, I added a TTL configuration. After the job started, during one automatic recovery from a failure, it reported the error shown in the attached images. The eventual fix was to change how the state descriptor is created, as follows:

Descriptor before the error:
{code:java}
public static final MapStateDescriptor<String, Integer> quantityJudgeStateDescriptor =
        new MapStateDescriptor<>(
                "quantityJudgeMapState",
                String.class,
                Integer.class); {code}
Descriptor after the fix:
{code:java}
public static final MapStateDescriptor<String, Integer> rateAlgorithmStateProperties =
        new MapStateDescriptor<>(
                "rateAlgorithmMapState",
                TypeInformation.of(new TypeHint<String>() {}),
                TypeInformation.of(new TypeHint<Integer>() {})); {code}
After switching to this second form, testing no longer showed the problem above. I do not yet know whether this is a bug; I am filing this issue to trace it to its root cause.

 





[jira] [Commented] (FLINK-27916) HybridSourceReaderTest.testReader failed with AssertionError

2022-09-18 Thread Xingbo Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17606392#comment-17606392
 ] 

Xingbo Huang commented on FLINK-27916:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41106&view=logs&j=aa18c3f6-13b8-5f58-86bb-c1cffb239496&t=502fb6c0-30a2-5e49-c5c2-a00fa3acb203&l=8633

> HybridSourceReaderTest.testReader failed with AssertionError
> 
>
> Key: FLINK-27916
> URL: https://issues.apache.org/jira/browse/FLINK-27916
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.16.0
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
> Attachments: Screen Shot 2022-07-21 at 5.51.40 PM.png
>
>
> {code:java}
> 2022-06-05T07:47:33.3332158Z Jun 05 07:47:33 [ERROR] Tests run: 3, Failures: 
> 1, Errors: 0, Skipped: 0, Time elapsed: 2.03 s <<< FAILURE! - in 
> org.apache.flink.connector.base.source.hybrid.HybridSourceReaderTest
> 2022-06-05T07:47:33.3334366Z Jun 05 07:47:33 [ERROR] 
> org.apache.flink.connector.base.source.hybrid.HybridSourceReaderTest.testReader
>   Time elapsed: 0.108 s  <<< FAILURE!
> 2022-06-05T07:47:33.3335385Z Jun 05 07:47:33 java.lang.AssertionError: 
> 2022-06-05T07:47:33.3336049Z Jun 05 07:47:33 
> 2022-06-05T07:47:33.3336682Z Jun 05 07:47:33 Expected size: 1 but was: 0 in:
> 2022-06-05T07:47:33.3337316Z Jun 05 07:47:33 []
> 2022-06-05T07:47:33.3338437Z Jun 05 07:47:33  at 
> org.apache.flink.connector.base.source.hybrid.HybridSourceReaderTest.assertAndClearSourceReaderFinishedEvent(HybridSourceReaderTest.java:199)
> 2022-06-05T07:47:33.3340082Z Jun 05 07:47:33  at 
> org.apache.flink.connector.base.source.hybrid.HybridSourceReaderTest.testReader(HybridSourceReaderTest.java:96)
> 2022-06-05T07:47:33.3341373Z Jun 05 07:47:33  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-06-05T07:47:33.3342540Z Jun 05 07:47:33  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-06-05T07:47:33.3344124Z Jun 05 07:47:33  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-06-05T07:47:33.3345283Z Jun 05 07:47:33  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> 2022-06-05T07:47:33.3346804Z Jun 05 07:47:33  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-06-05T07:47:33.3348218Z Jun 05 07:47:33  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-06-05T07:47:33.3349495Z Jun 05 07:47:33  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-06-05T07:47:33.3350779Z Jun 05 07:47:33  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-06-05T07:47:33.3351956Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-06-05T07:47:33.3357032Z Jun 05 07:47:33  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-06-05T07:47:33.3358633Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-06-05T07:47:33.3360003Z Jun 05 07:47:33  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-06-05T07:47:33.3361924Z Jun 05 07:47:33  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-06-05T07:47:33.3363427Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-06-05T07:47:33.3364793Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-06-05T07:47:33.3365619Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-06-05T07:47:33.3366254Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-06-05T07:47:33.3366939Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2022-06-05T07:47:33.3367556Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-06-05T07:47:33.3368268Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 2022-06-05T07:47:33.3369166Z Jun 05 07:47:33  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> 2022-06-05T07:47:33.3369993Z Jun 05 07:47:33  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> 2022-06-05T07:47:33.3371021Z Jun 05 07:47:33  at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
> 2022-06-05T07:47:33.3372128Z Jun 05 07:47:33  at 
> org.juni

[GitHub] [flink] LadyForest commented on pull request #20652: [FLINK-22315][table] Support add/modify column/constraint/watermark for ALTER TABLE statement

2022-09-18 Thread GitBox


LadyForest commented on PR #20652:
URL: https://github.com/apache/flink/pull/20652#issuecomment-1250479200

   Hi @lsyldliu, do you have time to take a look?





[GitHub] [flink] lincoln-lil commented on pull request #20679: [FLINK-28738][table-planner] Adds a user doc about the determinism in streaming

2022-09-18 Thread GitBox


lincoln-lil commented on PR #20679:
URL: https://github.com/apache/flink/pull/20679#issuecomment-1250475895

   @godfreyhe multiple CI runs kept failing on unrelated connector cases; the 
latest one is https://issues.apache.org/jira/browse/FLINK-29315





[GitHub] [flink-web] HuangXingBo commented on a diff in pull request #569: Release Flink 1.14.6

2022-09-18 Thread GitBox


HuangXingBo commented on code in PR #569:
URL: https://github.com/apache/flink-web/pull/569#discussion_r973821044


##
_posts/2022-09-08-release-1.14.6.md:
##
@@ -0,0 +1,105 @@
+---
+layout: post
+title:  "Apache Flink 1.14.6 Release Announcement"
+date: 2022-09-08T00:00:00.000Z
+categories: news
+authors:
+- xingbo:
+  name: "Xingbo Huang"
+
+excerpt: The Apache Flink Community is pleased to announce another bug fix 
release for Flink 1.14.
+
+---
+
+The Apache Flink Community is pleased to announce another bug fix release for 
Flink 1.14.
+
+This release includes 34 bug fixes, vulnerability fixes and minor improvements 
for Flink 1.14.
+Below you will find a list of all bugfixes and improvements (excluding 
improvements to the build infrastructure and build stability). For a complete 
list of all changes see:
+[JIRA](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12351834).
+
+We highly recommend all users upgrade to Flink 1.14.6.
+
+# Release Artifacts
+
+## Maven Dependencies
+
+```xml
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-java</artifactId>
+  <version>1.14.6</version>
+</dependency>
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-streaming-java_2.11</artifactId>
+  <version>1.14.6</version>
+</dependency>
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-clients_2.11</artifactId>
+  <version>1.14.6</version>
+</dependency>
+```
+
+## Binaries
+
+You can find the binaries on the updated [Downloads page]({{ site.baseurl 
}}/downloads.html).
+
+## Docker Images
+
+* [library/flink](https://hub.docker.com/_/flink?tab=tags&page=1&name=1.14.6) 
(official images)

Review Comment:
   Good catch. I agree that there is no need to modify the previous blog posts.






[GitHub] [flink] xuyangzhong commented on pull request #20823: [FLINK-29280][table-planner] fix join hints could not be propagated in subquery

2022-09-18 Thread GitBox


xuyangzhong commented on PR #20823:
URL: https://github.com/apache/flink/pull/20823#issuecomment-1250466188

   The failed case is caused by 
https://issues.apache.org/jira/browse/FLINK-29315





[jira] [Commented] (FLINK-29315) HDFSTest#testBlobServerRecovery fails on CI

2022-09-18 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17606387#comment-17606387
 ] 

xuyang commented on FLINK-29315:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=40997&view=logs&j=ab8d7049-0920-5f52-eeed-e548522c2880&t=a7d66b08-71af-51c3-2721-4b90fa4f94e3

> HDFSTest#testBlobServerRecovery fails on CI
> ---
>
> Key: FLINK-29315
> URL: https://issues.apache.org/jira/browse/FLINK-29315
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.16.0, 1.15.2
>Reporter: Chesnay Schepler
>Priority: Blocker
>
> The test started failing 2 days ago on different branches. I suspect 
> something's wrong with the CI infrastructure.
> {code:java}
> Sep 15 09:11:22 [ERROR] Failures: 
> Sep 15 09:11:22 [ERROR]   HDFSTest.testBlobServerRecovery Multiple Failures 
> (2 failures)
> Sep 15 09:11:22   java.lang.AssertionError: Test failed Error while 
> running command to get file permissions : java.io.IOException: Cannot run 
> program "ls": error=1, Operation not permitted
> Sep 15 09:11:22   at 
> java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.runCommand(Shell.java:913)
> Sep 15 09:11:22   at org.apache.hadoop.util.Shell.run(Shell.java:869)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.execCommand(Shell.java:1264)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.execCommand(Shell.java:1246)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1089)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:697)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:672)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2580)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2622)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2604)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2497)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1501)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:851)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:485)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:444)
> Sep 15 09:11:22   at 
> org.apache.flink.hdfstests.HDFSTest.createHDFS(HDFSTest.java:93)
> Sep 15 09:11:22   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Sep 15 09:11:22   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Sep 15 09:11:22   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Sep 15 09:11:22   at 
> java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
> Sep 15 09:11:22   ... 67 more
> Sep 15 09:11:22 
> Sep 15 09:11:22   java.lang.NullPointerException: 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-ml] yunfengzhou-hub commented on a diff in pull request #155: [FLINK-29318] Add Transformer for PolynomialExpansion

2022-09-18 Thread GitBox


yunfengzhou-hub commented on code in PR #155:
URL: https://github.com/apache/flink-ml/pull/155#discussion_r973815348


##
flink-ml-lib/src/test/java/org/apache/flink/ml/feature/PolynomialExpansionTest.java:
##
@@ -20,7 +20,7 @@
 

Review Comment:
   Let's add back the `NormalizerTest`.



##
flink-ml-lib/src/main/java/org/apache/flink/ml/feature/polynomialexpansion/PolynomialExpansion.java:
##
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.feature.polynomialexpansion;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.api.java.typeutils.RowTypeInfo;
+import org.apache.flink.ml.api.Transformer;
+import org.apache.flink.ml.common.datastream.TableUtils;
+import org.apache.flink.ml.linalg.DenseVector;
+import org.apache.flink.ml.linalg.SparseVector;
+import org.apache.flink.ml.linalg.Vector;
+import org.apache.flink.ml.linalg.typeinfo.VectorTypeInfo;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.util.ParamUtils;
+import org.apache.flink.ml.util.ReadWriteUtils;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.table.api.internal.TableImpl;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.commons.lang3.ArrayUtils;
+import org.apache.commons.math3.util.ArithmeticUtils;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ * A Transformer that expands the input vectors in polynomial space.
+ *
+ * <p>Take a 2-dimensional vector as an example: `(x, y)`; if we expand it with degree 2, then
+ * we get `(x, x * x, y, x * y, y * y)`.
+ *
+ * <p>For more information about the polynomial expansion, see
+ * http://en.wikipedia.org/wiki/Polynomial_expansion.
+ */
+public class PolynomialExpansion
+        implements Transformer<PolynomialExpansion>,
+                PolynomialExpansionParams<PolynomialExpansion> {
+    private final Map<Param<?>, Object> paramMap = new HashMap<>();
+
+    public PolynomialExpansion() {
+        ParamUtils.initializeMapWithDefaultValues(paramMap, this);
+    }
+
+    @Override
+    public Table[] transform(Table... inputs) {
+        Preconditions.checkArgument(inputs.length == 1);
+        StreamTableEnvironment tEnv =
+                (StreamTableEnvironment) ((TableImpl) inputs[0]).getTableEnvironment();
+        RowTypeInfo inputTypeInfo = TableUtils.getRowTypeInfo(inputs[0].getResolvedSchema());
+
+        RowTypeInfo outputTypeInfo =
+                new RowTypeInfo(
+                        ArrayUtils.addAll(inputTypeInfo.getFieldTypes(), VectorTypeInfo.INSTANCE),
+                        ArrayUtils.addAll(inputTypeInfo.getFieldNames(), getOutputCol()));
+
+        DataStream<Row> output =
+                tEnv.toDataStream(inputs[0])
+                        .map(
+                                new PolynomialExpansionFunction(getDegree(), getInputCol()),
+                                outputTypeInfo);
+
+        Table outputTable = tEnv.fromDataStream(output);
+        return new Table[] {outputTable};
+    }
+
+    @Override
+    public void save(String path) throws IOException {
+        ReadWriteUtils.saveMetadata(this, path);
+    }
+
+    public static PolynomialExpansion load(StreamTableEnvironment env, String path)
+            throws IOException {
+        return ReadWriteUtils.loadStageParam(path);
+    }
+
+    @Override
+    public Map<Param<?>, Object> getParamMap() {
+        return paramMap;
+    }
+
+    /** Polynomial expansion function that expands a vector in polynomial space. */
+    private static class PolynomialExpansionFunction implements MapFunction<Row, Row> {

Review Comment:
   In Spark there is a detailed comment explaining how this function is 
implemented:
   ```java
   /**
* The expansion is done via recursion. Given n features and degree d, the 
size after expansion is
* (n + d choose d) (including 1 and first-order values). For example, let 
f([a, b, c], 3) be the
* function that expands [a, b, c] to their monomials

[GitHub] [flink] lincoln-lil commented on pull request #20800: [FLINK-28850][table-planner] Support table alias in LOOKUP hint

2022-09-18 Thread GitBox


lincoln-lil commented on PR #20800:
URL: https://github.com/apache/flink/pull/20800#issuecomment-1250441016

   @flinkbot run azure





[GitHub] [flink] lincoln-lil commented on pull request #20800: [FLINK-28850][table-planner] Support table alias in LOOKUP hint

2022-09-18 Thread GitBox


lincoln-lil commented on PR #20800:
URL: https://github.com/apache/flink/pull/20800#issuecomment-1250440721

   Succeeded in pipeline: 
https://dev.azure.com/lincoln86xy/lincoln86xy/_build/results?buildId=304&view=results
   
   





[GitHub] [flink-ml] yunfengzhou-hub commented on pull request #155: [FLINK-29318] Add Transformer for PolynomialExpansion

2022-09-18 Thread GitBox


yunfengzhou-hub commented on PR #155:
URL: https://github.com/apache/flink-ml/pull/155#issuecomment-1250437078

   Hi @weibozhao, could you please refine the description of this PR, as we 
have done in other algorithms' PRs?





[GitHub] [flink-ml] yunfengzhou-hub commented on pull request #156: [FLINK-29323] Refine Transformer for VectorAssembler

2022-09-18 Thread GitBox


yunfengzhou-hub commented on PR #156:
URL: https://github.com/apache/flink-ml/pull/156#issuecomment-1250436451

   Hi @weibozhao, could you please refine the description and JavaDocs of this 
PR, illustrating why we need to add the `size` parameter to VectorAssembler? 
What additional functionality does this parameter support, and how is it 
compatible with the original public API?





[GitHub] [flink] bzhaoopenstack commented on pull request #20668: [WIP][FLINK-29055][k8s] add K8S options for excluding the buildin decorators

2022-09-18 Thread GitBox


bzhaoopenstack commented on PR #20668:
URL: https://github.com/apache/flink/pull/20668#issuecomment-1250434206

   @flinkbot run azure





[jira] [Commented] (FLINK-29244) Add metric lastMaterializationDuration to ChangelogMaterializationMetricGroup

2022-09-18 Thread Feifan Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17606379#comment-17606379
 ] 

Feifan Wang commented on FLINK-29244:
-

Hi [~yuanmei], what do you think about this?

> Add metric lastMaterializationDuration to  ChangelogMaterializationMetricGroup
> --
>
> Key: FLINK-29244
> URL: https://issues.apache.org/jira/browse/FLINK-29244
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Feifan Wang
>Priority: Major
>
> Materialization duration can help us evaluate the efficiency of 
> materialization and the impact on the job.
>  
> What do you think about this? [~roman] 





[GitHub] [flink-statefun] fransking opened a new pull request, #317: FLINK-27934 - Improving inefficient deserialization/serialization of state variables within a batch

2022-09-18 Thread GitBox


fransking opened a new pull request, #317:
URL: https://github.com/apache/flink-statefun/pull/317

   Avoids reserialisation unless the state object has actually changed, and 
maintains a deserialised backing property to avoid multiple deserialisations 
within a batch.
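
   The caching approach described in the PR summary can be sketched roughly as 
follows. This is an illustrative wrapper with hypothetical names, not the actual 
StateFun state classes:

```java
import java.util.function.Function;

// Hypothetical sketch: deserialize a state value at most once per batch and
// reserialize only when it actually changed. Not the StateFun implementation.
final class CachedState<T> {
    private byte[] serialized;      // last serialized form seen by the runtime
    private T value;                // lazily filled deserialized backing value
    private boolean deserialized;   // whether `value` is valid
    private boolean dirty;          // whether `value` diverged from `serialized`

    CachedState(byte[] serialized) {
        this.serialized = serialized;
    }

    // Deserializes at most once; later reads within the batch reuse the cache.
    T get(Function<byte[], T> deserializer) {
        if (!deserialized) {
            value = deserializer.apply(serialized);
            deserialized = true;
        }
        return value;
    }

    void set(T newValue) {
        value = newValue;
        deserialized = true;
        dirty = true;
    }

    // Reserializes only if the value changed; otherwise returns the old bytes.
    byte[] flush(Function<T, byte[]> serializer) {
        if (dirty) {
            serialized = serializer.apply(value);
            dirty = false;
        }
        return serialized;
    }
}
```

   Repeated `get` calls within a batch then cost a single deserialization, and 
an unchanged value is written back without reserializing.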





[GitHub] [flink-statefun] fransking closed pull request #315: FLINK-27934 - Inefficient deserialization/serialization of state variables within a batch

2022-09-18 Thread GitBox


fransking closed pull request #315: FLINK-27934 - Inefficient 
deserialization/serialization of state variables within a batch
URL: https://github.com/apache/flink-statefun/pull/315





[GitHub] [flink-kubernetes-operator] gauravmiglanid11 commented on pull request #360: [FLINK-29165][docs] update docs to create and delete deployment via code

2022-09-18 Thread GitBox


gauravmiglanid11 commented on PR #360:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/360#issuecomment-1250380177

   Hi @morhidi, the changes are done, please review.





[GitHub] [flink-kubernetes-operator] gauravmiglanid11 commented on a diff in pull request #360: [FLINK-29165][docs] update docs to create and delete deployment via code

2022-09-18 Thread GitBox


gauravmiglanid11 commented on code in PR #360:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/360#discussion_r973770361


##
examples/kubernetes-client-examples/src/main/java/org/apache/flink/examples/Basic.java:
##
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.examples;
+
+import org.apache.flink.kubernetes.operator.crd.FlinkDeployment;
+import org.apache.flink.kubernetes.operator.crd.spec.FlinkDeploymentSpec;
+import org.apache.flink.kubernetes.operator.crd.spec.FlinkVersion;
+import org.apache.flink.kubernetes.operator.crd.spec.JobManagerSpec;
+import org.apache.flink.kubernetes.operator.crd.spec.JobSpec;
+import org.apache.flink.kubernetes.operator.crd.spec.Resource;
+import org.apache.flink.kubernetes.operator.crd.spec.TaskManagerSpec;
+import org.apache.flink.kubernetes.operator.crd.spec.UpgradeMode;
+
+import io.fabric8.kubernetes.api.model.ObjectMeta;
+import io.fabric8.kubernetes.client.KubernetesClient;
+import io.fabric8.kubernetes.client.KubernetesClientException;
+
+import java.util.Map;
+import java.util.Optional;
+
+import static java.util.Map.entry;
+
+/** client code for ../basic.yaml. */
+public class Basic {
+    public static void main(String[] args) {
+        FlinkDeployment flinkDeployment = new FlinkDeployment();
+        flinkDeployment.setApiVersion("flink.apache.org/v1beta1");
+        flinkDeployment.setKind("FlinkDeployment");
+        ObjectMeta objectMeta = new ObjectMeta();
+        objectMeta.setName("advanced-ingress");

Review Comment:
   done



##
examples/kubernetes-client-examples/pom.xml:
##
@@ -0,0 +1,49 @@
+
+http://maven.apache.org/POM/4.0.0";
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd";>
+4.0.0
+
+
+org.apache.flink
+flink-kubernetes-operator-parent
+1.2-SNAPSHOT
+../..
+
+
+kubernetes-client-examples
+Flink Kubernetes client code Example
+
+
+
+io.fabric8
+kubernetes-client
+5.12.3
+compile
+
+
+org.apache.flink
+flink-kubernetes-operator
+1.2-SNAPSHOT
+compile
+
+
+
+

Review Comment:
   added
   






[jira] [Commented] (FLINK-29261) Consider using FAIL_ON_UNKNOWN_PROPERTIES=false in the Operator

2022-09-18 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17606342#comment-17606342
 ] 

Gyula Fora commented on FLINK-29261:


I think this would be a reasonable improvement, as the correct JSON properties 
are already verified and guaranteed by Kubernetes itself based on the installed 
CRD. This would give us a chance to handle multiple versions flexibly.
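
The proposed setting can be sketched with plain Jackson. A minimal example; 
`Spec` here is an illustrative stand-in for `FlinkDeploymentSpec`, not the 
operator's class:

```java
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

public class IgnoreUnknownDemo {
    // Illustrative stand-in for FlinkDeploymentSpec with a single known field.
    public static class Spec {
        public String image;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper()
                // With this feature disabled, fields written by a newer operator
                // version (e.g. "mode") are silently skipped on downgrade
                // instead of throwing UnrecognizedPropertyException.
                .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);

        String json = "{\"image\":\"flink:1.15\",\"mode\":\"native\"}";
        Spec spec = mapper.readValue(json, Spec.class);
        System.out.println(spec.image); // flink:1.15
    }
}
```

The trade-off is that genuinely misspelled fields are also ignored, which is 
acceptable here because the CRD schema already validates the spec upstream.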

> Consider using FAIL_ON_UNKNOWN_PROPERTIES=false in the Operator
> ---
>
> Key: FLINK-29261
> URL: https://issues.apache.org/jira/browse/FLINK-29261
> Project: Flink
>  Issue Type: Bug
>Reporter: Matyas Orhidi
>Priority: Major
>
> The operator cannot be downgraded, once the CR specification is written to 
> the `status`
>  
> Caused by: com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: 
> Unrecognized field "mode" (class 
> org.apache.flink.kubernetes.operator.crd.spec.FlinkDeploymentSpec), not 
> marked as ignorable (12 known properties: "restartNonce", "imagePullPolicy", 
> "ingress", "flinkConfiguration", "serviceAccount", "image", "job", 
> "podTemplate", "jobManager", "logConfiguration", "flinkVersion", 
> "taskManager"])
>  at [Source: UNKNOWN; byte offset: #UNKNOWN] (through reference chain: 
> org.apache.flink.kubernetes.operator.crd.spec.FlinkDeploymentSpec["mode"])
> at 
> com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:61)
> at 
> com.fasterxml.jackson.databind.DeserializationContext.handleUnknownProperty(DeserializationContext.java:1127)
> at 
> com.fasterxml.jackson.databind.deser.std.StdDeserializer.handleUnknownProperty(StdDeserializer.java:1989)
> at 
> com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownProperty(BeanDeserializerBase.java:1700)
> at 
> com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownVanilla(BeanDeserializerBase.java:1678)
> at 
> com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:319)
> at 
> com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:176)
> at 
> com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:322)
> at 
> com.fasterxml.jackson.databind.ObjectMapper._readValue(ObjectMapper.java:4650)
> at 
> com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2831)
> at 
> com.fasterxml.jackson.databind.ObjectMapper.treeToValue(ObjectMapper.java:3295)
> at 
> org.apache.flink.kubernetes.operator.reconciler.ReconciliationUtils.deserializeSpecWithMeta(ReconciliationUtils.java:288)
> ... 18 more





[jira] [Commented] (FLINK-29199) Support blue-green deployment type

2022-09-18 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17606341#comment-17606341
 ] 

Gyula Fora commented on FLINK-29199:


This is something that could be built on top of the current logic in a custom 
way using manual savepoint triggering, forking a new CR and tracking the status 
of the new deployment before deleting the old one.

> Support blue-green deployment type
> --
>
> Key: FLINK-29199
> URL: https://issues.apache.org/jira/browse/FLINK-29199
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
> Environment: Kubernetes
>Reporter: Oleg Vorobev
>Priority: Major
>
> Are there any plans to support blue-green deployment/rollout mode similar to 
> *BlueGreen* in the 
> [flinkk8soperator|https://github.com/lyft/flinkk8soperator] to avoid downtime 
> while updating?
> The idea is to run a new version in parallel with an old one and remove the 
> old one only after the stability condition of the new one is satisfied (like 
> in 
> [rollbacks|https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-1.1/docs/custom-resource/job-management/#application-upgrade-rollbacks-experimental]).
> For stateful apps with {*}upgradeMode: savepoint{*}, this means: not 
> cancelling an old job after creating a savepoint -> starting new job from 
> that savepoint -> waiting for it to become running/one successful 
> checkpoint/timeout or something else -> cancelling and removing old job.
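
The upgrade sequence described above could be modelled as a small state machine. 
A hedged sketch with hypothetical names, not operator code:

```java
// Hypothetical blue-green upgrade phases for a stateful Flink job; the real
// operator has no such class. Each step reacts to observed cluster conditions.
enum Phase { SAVEPOINT_OLD, START_NEW, AWAIT_STABLE, TEARDOWN_OLD, DONE }

final class BlueGreenUpgrade {
    Phase phase = Phase.SAVEPOINT_OLD;

    Phase step(boolean savepointDone, boolean newJobStable) {
        switch (phase) {
            case SAVEPOINT_OLD:
                // Trigger a savepoint on the old job but keep it running.
                if (savepointDone) phase = Phase.START_NEW;
                break;
            case START_NEW:
                // Start the new job from that savepoint, in parallel with the old one.
                phase = Phase.AWAIT_STABLE;
                break;
            case AWAIT_STABLE:
                // Wait for RUNNING / one successful checkpoint / timeout.
                if (newJobStable) phase = Phase.TEARDOWN_OLD;
                break;
            case TEARDOWN_OLD:
                // Only now cancel and remove the old job.
                phase = Phase.DONE;
                break;
            default:
        }
        return phase;
    }
}
```

Rollback falls out naturally: if stabilization times out in `AWAIT_STABLE`, the 
new job is removed and the old one keeps running untouched.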





[jira] [Commented] (FLINK-29199) Support blue-green deployment type

2022-09-18 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17606340#comment-17606340
 ] 

Gyula Fora commented on FLINK-29199:


Also, this would be quite tricky to implement with the current architecture 
while using Flink Kubernetes HA.

It would mean we would have to dynamically change and juggle cluster IDs, which 
are currently fixed and greatly simplify the logic.

> Support blue-green deployment type
> --
>
> Key: FLINK-29199
> URL: https://issues.apache.org/jira/browse/FLINK-29199
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
> Environment: Kubernetes
>Reporter: Oleg Vorobev
>Priority: Major
>
> Are there any plans to support blue-green deployment/rollout mode similar to 
> *BlueGreen* in the 
> [flinkk8soperator|https://github.com/lyft/flinkk8soperator] to avoid downtime 
> while updating?
> The idea is to run a new version in parallel with an old one and remove the 
> old one only after the stability condition of the new one is satisfied (like 
> in 
> [rollbacks|https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-1.1/docs/custom-resource/job-management/#application-upgrade-rollbacks-experimental]).
> For stateful apps with {*}upgradeMode: savepoint{*}, this means: not 
> cancelling an old job after creating a savepoint -> starting new job from 
> that savepoint -> waiting for it to become running/one successful 
> checkpoint/timeout or something else -> cancelling and removing old job.





[GitHub] [flink-kubernetes-operator] gyfora commented on pull request #370: [FLINK-29288] Make it possible to use job jars in the system classpath

2022-09-18 Thread GitBox


gyfora commented on PR #370:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/370#issuecomment-1250366042

   > Re: [CI 
failure](https://github.com/apache/flink-kubernetes-operator/actions/runs/3069970323/jobs/4972514311),
 it looks like the Maven step completed successfully (`[INFO] BUILD SUCCESS`), 
but for some reason, the run stopped with `Error: Process completed with exit 
code 1.`. Is it a known issue?
   
   ```
   Please generate the java doc via 'mvn clean install -DskipTests 
-Pgenerate-docs' again
   Error: Process completed with exit code 1.
   ```





[GitHub] [flink-kubernetes-operator] gyfora commented on a diff in pull request #370: [FLINK-29288] Make it possible to use job jars in the system classpath

2022-09-18 Thread GitBox


gyfora commented on code in PR #370:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/370#discussion_r973760343


##
flink-kubernetes-standalone/src/main/java/org/apache/flink/kubernetes/operator/kubeclient/parameters/StandaloneKubernetesJobManagerParameters.java:
##
@@ -87,4 +88,8 @@ public Boolean getAllowNonRestoredState() {
         }
         return null;
     }
+
+    public Boolean isPipelineClasspathDefined() {
+        return flinkConfig.contains(PipelineOptions.CLASSPATHS);
+    }

Review Comment:
   I think without the `UserLibMountDecorator` it would be more cumbersome for 
the user to use the standalone mode. @usamj could probably provide more context 
here.






[GitHub] [flink-kubernetes-operator] gyfora commented on a diff in pull request #370: [FLINK-29288] Make it possible to use job jars in the system classpath

2022-09-18 Thread GitBox


gyfora commented on code in PR #370:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/370#discussion_r973754287


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/validation/DefaultValidator.java:
##
@@ -180,10 +180,6 @@ private Optional<String> validateJobSpec(
             return Optional.empty();
         }
 
-        if (StringUtils.isNullOrWhitespaceOnly(job.getJarURI())) {
-            return Optional.of("Jar URI must be defined");
-        }

Review Comment:
   You removed validation of JarURI for both FlinkDeployment and 
FlinkSessionJob but at this moment the FlinkSessionJob still requires it 
(otherwise I think we might get a NPE somewhere else). This is why I thought it 
would be simplest to include the `no-op` jar changes also in this PR 
   
   cc @jeesmon 






[GitHub] [flink-kubernetes-operator] sap1ens commented on pull request #370: [FLINK-29288] Make it possible to use job jars in the system classpath

2022-09-18 Thread GitBox


sap1ens commented on PR #370:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/370#issuecomment-1250355459

   Re: [CI 
failure](https://github.com/apache/flink-kubernetes-operator/actions/runs/3069970323/jobs/4972514311),
 it looks like the Maven step completed successfully (`[INFO] BUILD SUCCESS`), 
but for some reason, the run stopped with `Error: Process completed with exit 
code 1.`. Is it a known issue?





[GitHub] [flink-kubernetes-operator] sap1ens commented on pull request #370: [FLINK-29288] Make it possible to use job jars in the system classpath

2022-09-18 Thread GitBox


sap1ens commented on PR #370:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/370#issuecomment-1250355101

   > If I understand it correctly this would in theory also solve 
https://issues.apache.org/jira/browse/FLINK-29144 as a side effect :)
   
   In a way yes, you can place multiple jars in the system classpath instead 
and not specify anything as `jarURI`.





[GitHub] [flink-kubernetes-operator] sap1ens commented on pull request #370: [FLINK-29288] Make it possible to use job jars in the system classpath

2022-09-18 Thread GitBox


sap1ens commented on PR #370:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/370#issuecomment-1250354948

   > Also this does not address the SessionJobs as we discussed in the Slack 
thread with @jeesmon. 
   
   Yes, but aside from providing the "no-op" jar, which I thought @jeesmon 
was working on separately, is there anything else that's missing 
here?





[GitHub] [flink-kubernetes-operator] sap1ens commented on a diff in pull request #370: [FLINK-29288] Make it possible to use job jars in the system classpath

2022-09-18 Thread GitBox


sap1ens commented on code in PR #370:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/370#discussion_r973753540


##
flink-kubernetes-standalone/src/main/java/org/apache/flink/kubernetes/operator/kubeclient/parameters/StandaloneKubernetesJobManagerParameters.java:
##
@@ -87,4 +88,8 @@ public Boolean getAllowNonRestoredState() {
 }
 return null;
 }
+
+public Boolean isPipelineClasspathDefined() {
+return flinkConfig.contains(PipelineOptions.CLASSPATHS);
+}

Review Comment:
   > Why can't we always mount the usrlib dir?
   
   Great question! Check the end of my PR description for more context. It 
turns out, when 
[StandaloneApplicationClusterEntryPoint](https://github.com/apache/flink/blob/40d50f177c8ac63057dea2c755173a0dc138ede5/flink-container/src/main/java/org/apache/flink/container/entrypoint/StandaloneApplicationClusterEntryPoint.java#L104)
 starts, it tries to find the `usrlib` folder. And if it's not null [it'll be 
used for loading the classes, not the system 
classpath](https://github.com/apache/flink/blob/40d50f177c8ac63057dea2c755173a0dc138ede5/flink-clients/src/main/java/org/apache/flink/client/program/DefaultPackagedProgramRetriever.java#L143-L147).
 So, we simply can't have a `usrlib` folder when we want to load classes from 
`/opt/flink/lib`.
   
   > What if PipelineOptions.CLASSPATHS points to a directory outside usrlib?
   
   Exactly. I don't know the context and all the use cases, but in my mind 
`UserLibMountDecorator` shouldn't be part of the operator - let the user 
create/mount the right folder and place the jars there. But deleting it in the 
context of this PR is a big change IMO, so I added a check based on the comment 
on the `UserLibMountDecorator` class: `Mount the Flink User Lib directory to 
enable Flink to pick up a Jars defined in pipeline.classpaths.`, which I think 
is misleading. WDYT?
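   The decision described above can be sketched in plain Java (a simplified 
illustration of the behavior being discussed, not the actual Flink entrypoint 
code; the method name is hypothetical):

```java
import java.io.File;

public class ClasspathSelection {
    // If a usrlib directory exists, user classes are loaded from it;
    // otherwise the system classpath (e.g. /opt/flink/lib) is used.
    // This is why the PR avoids mounting usrlib when job jars are
    // intended to live in the system classpath.
    static String selectUserCodeSource(File usrLibDir) {
        if (usrLibDir != null && usrLibDir.isDirectory()) {
            return "usrlib";
        }
        return "system-classpath";
    }

    public static void main(String[] args) {
        // No usrlib folder present, so the system classpath wins
        System.out.println(selectUserCodeSource(new File("/definitely/missing/usrlib")));
    }
}
```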






[GitHub] [flink-kubernetes-operator] gyfora commented on a diff in pull request #370: [FLINK-29288] Make it possible to use job jars in the system classpath

2022-09-18 Thread GitBox


gyfora commented on code in PR #370:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/370#discussion_r973734058


##
flink-kubernetes-standalone/src/main/java/org/apache/flink/kubernetes/operator/kubeclient/parameters/StandaloneKubernetesJobManagerParameters.java:
##
@@ -87,4 +88,8 @@ public Boolean getAllowNonRestoredState() {
 }
 return null;
 }
+
+public Boolean isPipelineClasspathDefined() {
+return flinkConfig.contains(PipelineOptions.CLASSPATHS);
+}

Review Comment:
   I don't really understand this check. Why can't we always mount the usrlib 
dir? What if PipelineOptions.CLASSPATHS points to a directory outside usrlib?






[GitHub] [flink-kubernetes-operator] gyfora commented on pull request #370: [FLINK-29288] Make it possible to use job jars in the system classpath

2022-09-18 Thread GitBox


gyfora commented on PR #370:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/370#issuecomment-1250325337

   If I understand it correctly this would in theory also solve 
https://issues.apache.org/jira/browse/FLINK-29144 as a side effect :) 





[GitHub] [flink] ganlute commented on pull request #20542: [FLINK-28910][Connectors/hbase]Fix potential data deletion while updating HBase rows

2022-09-18 Thread GitBox


ganlute commented on PR #20542:
URL: https://github.com/apache/flink/pull/20542#issuecomment-1250315262

   The CI failure seems to be unrelated to this PR.





[GitHub] [flink] ganlute closed pull request #20851: [WIP][FLINK-28910][Connectors/hbase]Fix potential data deletion while updating HBase rows

2022-09-18 Thread GitBox


ganlute closed pull request #20851: [WIP][FLINK-28910][Connectors/hbase]Fix 
potential data deletion while updating HBase rows
URL: https://github.com/apache/flink/pull/20851





[GitHub] [flink] lincoln-lil commented on pull request #20679: [FLINK-28738][table-planner] Adds a user doc about the determinism in streaming

2022-09-18 Thread GitBox


lincoln-lil commented on PR #20679:
URL: https://github.com/apache/flink/pull/20679#issuecomment-1250290239

   @flinkbot run azure





[jira] [Closed] (FLINK-16332) Support un-ordered mode for async lookup join

2022-09-18 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee closed FLINK-16332.
---
Resolution: Fixed

> Support un-ordered mode for async lookup join
> -
>
> Key: FLINK-16332
> URL: https://issues.apache.org/jira/browse/FLINK-16332
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Runtime
>Reporter: Jark Wu
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> Currently, we only support "ordered" mode for async lookup join, because 
> ordering in streaming SQL is very important: the accumulate and retract 
> messages should stay in order. If messages are out of order, the result will 
> be wrong. 
> The "un-ordered" mode can be enabled by a job configuration, but it will only 
> be a preferred option: it will be enabled only if it doesn't affect the order 
> of acc/retract messages (e.g. the input is just an append-only stream). 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-18871) Non-deterministic function could break retract mechanism

2022-09-18 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee closed FLINK-18871.
---
Fix Version/s: 1.16.0
   Resolution: Fixed

Since 1.16.0, non-deterministic functions in a changelog pipeline can be 
validated to check whether they affect correctness.

> Non-deterministic function could break retract mechanism
> 
>
> Key: FLINK-18871
> URL: https://issues.apache.org/jira/browse/FLINK-18871
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Table SQL / Runtime
>Reporter: Benchao Li
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
> Fix For: 1.16.0
>
>
> For example, we have a following SQL:
> {code:sql}
> create view view1 as
> select 
>   max(a) as m1,
>   max(b) as m2 -- b is a timestamp
> from T
> group by c, d;
> create view view2 as
> select * from view1
> where m2 > CURRENT_TIMESTAMP;
> insert into MySink
> select sum(m1) as m1 
> from view2
> group by c;
> {code}
> view1 will produce retract messages, and the same message in view2 may 
> produce different results, so the second agg will produce a wrong result. 
>  For example,
> {noformat}
> view1:
> + (1, 2020-8-10 16:13:00)
> - (1, 2020-8-10 16:13:00)
> + (2, 2020-8-10 16:13:10)
> view2:
> + (1, 2020-8-10 16:13:00)
> - (1, 2020-8-10 16:13:00) // this record may be filtered out
> + (2, 2020-8-10 16:13:10)
> MySink:
> + (1, 2020-8-10 16:13:00)
> + (2, 2020-8-10 16:13:10) // will produce wrong result.
> {noformat}
> In general, a non-deterministic function may break the retract mechanism. 
> All downstream operators that rely on the retraction mechanism will 
> produce wrong results or throw exceptions, such as Agg / some sinks which need 
> retract messages / TopN / Window.
> (The example above is a simplified version of some production jobs in our 
> scenario, just to explain the core problem)
> CC [~ykt836] [~jark]





[jira] [Closed] (FLINK-24412) retract stream join on topN error

2022-09-18 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee closed FLINK-24412.
---
Fix Version/s: 1.16.0
   Resolution: Fixed

> retract stream join on topN error
> --
>
> Key: FLINK-24412
> URL: https://issues.apache.org/jira/browse/FLINK-24412
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.12.3
>Reporter: sandy du
>Priority: Critical
> Fix For: 1.16.0
>
>
> I can reproduce this error with the following SQL:
>  create table user_info(
>  name string,
>  age int,
>  primary key(name) not enforced
>  ) with(
>  'connector'='jdbc',
>  'url'='jdbc:mysql...',
>  ...
>  'lookup.cache.max-rows'='0',
>  'lookup.cache.ttl'='1 s'
>  );
> create table user_action(
>  name string,
>  app string,
>  dt string,
>  proctime as proctime()
>  )with(
>  'connector'='kafka',
>  ...
>  );
> create view v_user_action as select * from(
>  select name,app,proctime,row_number() over(partition by name,app order by dt 
> desc) as rn from user_action
>  )t where rn=1;
> create view user_out as select a.name,a.app,b.age from v_user_action a left 
> join user_info
>  for system_time as of a.proctime as b on a.name=b.name;
> select * from (
>  select name,app,age ,row_number() over(partition by name,app order by age 
> desc) as rn from user_out
>  ) t where rn=1;
>   
>  *first :*
>  {color:#de350b} user_action  got data  
> \{"name":"11","app":"app","dt":"2021-09-10"}{color}
> {color:#de350b}user_info   got data  \{"name":"11","age":11}{color}
> At this point the SQL runs successfully.
> *{color:#de350b}then :{color}*
> {color:#de350b}user_action  got data  
> \{"name":"11","app":"app","dt":"2021-09-20"}{color}
> {color:#de350b}user_info   got data  \{"name":"11","age":11}  
> \{"name":"11","age":22} {color}
> Now, for the TopN query on the last SQL, the TopN operator will throw an 
> {{Caused by: java.lang.RuntimeException: Can not retract a non-existent 
> record. This should never happen.}}





[GitHub] [flink] lincoln-lil commented on pull request #20679: [FLINK-28738][table-planner] Adds a user doc about the determinism in streaming

2022-09-18 Thread GitBox


lincoln-lil commented on PR #20679:
URL: https://github.com/apache/flink/pull/20679#issuecomment-1250244597

   @flinkbot run azure





[jira] [Closed] (FLINK-29283) Remove hardcoded apiVersion from operator unit test

2022-09-18 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-29283.
--
Resolution: Won't Fix

> Remove hardcoded apiVersion from operator unit test
> ---
>
> Key: FLINK-29283
> URL: https://issues.apache.org/jira/browse/FLINK-29283
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.1.0
>Reporter: Tony Garrard
>Priority: Minor
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.2.0
>
>
> The unit test 
> flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/utils/ReconciliationUtilsTest.java
>  has a hardcoded apiVersion. To facilitate modifications, it should be using 
> the constants provided in the class CrdConstants i.e. 
> assertEquals(API_GROUP + "/" + API_VERSION, 
> internalMeta.get("apiVersion").asText());
> instead of "flink.apache.org/v1beta1"





[GitHub] [flink-kubernetes-operator] gyfora closed pull request #367: [FLINK-29283] Remove hardcoded apiVersion from operator unit test

2022-09-18 Thread GitBox


gyfora closed pull request #367: [FLINK-29283] Remove hardcoded apiVersion from 
operator unit test
URL: https://github.com/apache/flink-kubernetes-operator/pull/367





[GitHub] [flink-kubernetes-operator] gyfora commented on pull request #367: [FLINK-29283] Remove hardcoded apiVersion from operator unit test

2022-09-18 Thread GitBox


gyfora commented on PR #367:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/367#issuecomment-1250236032

   I am closing this PR





[GitHub] [flink] tisonkun commented on a diff in pull request #20584: [FLINK-26182][Connector/pulsar] Create a e2e tests for Pulsar sink.

2022-09-18 Thread GitBox


tisonkun commented on code in PR #20584:
URL: https://github.com/apache/flink/pull/20584#discussion_r973693195


##
flink-end-to-end-tests/flink-end-to-end-tests-pulsar/src/test/java/org/apache/flink/tests/util/pulsar/PulsarSourceUnorderedE2ECase.java:
##
@@ -19,21 +19,26 @@
 package org.apache.flink.tests.util.pulsar;
 
 import org.apache.flink.connector.pulsar.testutils.PulsarTestContextFactory;
+import 
org.apache.flink.connector.pulsar.testutils.source.UnorderedSourceTestSuiteBase;
+import 
org.apache.flink.connector.pulsar.testutils.source.cases.KeySharedSubscriptionContext;
+import 
org.apache.flink.connector.pulsar.testutils.source.cases.SharedSubscriptionContext;
 import org.apache.flink.connector.testframe.junit.annotations.TestContext;
 import org.apache.flink.connector.testframe.junit.annotations.TestEnv;
 import 
org.apache.flink.connector.testframe.junit.annotations.TestExternalSystem;
 import org.apache.flink.connector.testframe.junit.annotations.TestSemantics;
 import org.apache.flink.streaming.api.CheckpointingMode;
-import org.apache.flink.tests.util.pulsar.cases.KeySharedSubscriptionContext;
-import org.apache.flink.tests.util.pulsar.cases.SharedSubscriptionContext;
 import 
org.apache.flink.tests.util.pulsar.common.FlinkContainerWithPulsarEnvironment;
 import 
org.apache.flink.tests.util.pulsar.common.PulsarContainerTestEnvironment;
-import org.apache.flink.tests.util.pulsar.common.UnorderedSourceTestSuiteBase;
+import org.apache.flink.testutils.junit.FailsOnJava11;
+
+import org.junit.experimental.categories.Category;
 
 /**
  * Pulsar E2E test based on connector testing framework. It's used for Shared 
& Key_Shared
  * subscription.
  */
+@SuppressWarnings("unused")
+@Category(value = {FailsOnJava11.class})

Review Comment:
   Have you investigated this failure? It seems we are ignoring some test 
cases.






[GitHub] [flink] godfreyhe commented on pull request #20679: [FLINK-28738][table-planner] Adds a user doc about the determinism in streaming

2022-09-18 Thread GitBox


godfreyhe commented on PR #20679:
URL: https://github.com/apache/flink/pull/20679#issuecomment-1250228642

   @flinkbot run azure

