Re: [PR] [FLINK-27986] Refactor the name of finish method for JdbcOutputFormatBuilder [flink]

2024-01-17 Thread via GitHub


xleoken closed pull request #18794: [FLINK-27986] Refactor the name of finish 
method for JdbcOutputFormatBuilder
URL: https://github.com/apache/flink/pull/18794


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-21949][table] Support ARRAY_AGG aggregate function [flink]

2024-01-17 Thread via GitHub


Jiabao-Sun commented on PR #23411:
URL: https://github.com/apache/flink/pull/23411#issuecomment-1897966163

   @flinkbot run azure





Re: [PR] [FLINK-32514] Support configuring checkpointing interval during process backlog [flink]

2024-01-17 Thread via GitHub


XComp commented on code in PR #22931:
URL: https://github.com/apache/flink/pull/22931#discussion_r1457044758


##
flink-runtime/src/main/java/org/apache/flink/runtime/operators/coordination/OperatorCoordinatorHolder.java:
##
@@ -143,12 +144,14 @@ private OperatorCoordinatorHolder(
 
     public void lazyInitialize(
             GlobalFailureHandler globalFailureHandler,
-            ComponentMainThreadExecutor mainThreadExecutor) {
+            ComponentMainThreadExecutor mainThreadExecutor,
+            @Nullable CheckpointCoordinator checkpointCoordinator) {
 
         this.globalFailureHandler = globalFailureHandler;
         this.mainThreadExecutor = mainThreadExecutor;
 
-        context.lazyInitialize(globalFailureHandler, mainThreadExecutor);
+        context.lazyInitialize(globalFailureHandler, mainThreadExecutor, checkpointCoordinator);

Review Comment:
   Thanks for clarification. Doing it as a hotfix commit in one other PR makes 
sense because of the reasons you mentioned. :+1: 






[jira] [Commented] (FLINK-33717) Cleanup the usage of deprecated StreamTableEnvironment#fromDataStream

2024-01-17 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808053#comment-17808053
 ] 

Jiabao Sun commented on FLINK-33717:


Hi [~jackylau], are you still working on this?
Could I help finish this task?

> Cleanup the usage of deprecated StreamTableEnvironment#fromDataStream
> -
>
> Key: FLINK-33717
> URL: https://issues.apache.org/jira/browse/FLINK-33717
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jane Chan
>Assignee: Jacky Lau
>Priority: Major
>
> {code:java}
> PythonScalarFunctionOperatorTestBase
> AvroTypesITCase {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34090][core] Introduce SerializerConfig [flink]

2024-01-17 Thread via GitHub


X-czh commented on code in PR #24127:
URL: https://github.com/apache/flink/pull/24127#discussion_r1457027155


##
flink-core/src/main/java/org/apache/flink/api/common/serialization/SerializerConfig.java:
##
@@ -0,0 +1,413 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.common.serialization;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.api.common.typeinfo.TypeInfoFactory;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.ConfigurationUtils;
+import org.apache.flink.configuration.PipelineOptions;
+import org.apache.flink.configuration.ReadableConfig;
+
+import com.esotericsoftware.kryo.Serializer;
+
+import java.io.Serializable;
+import java.util.LinkedHashMap;
+import java.util.LinkedHashSet;
+import java.util.List;
+import java.util.Objects;
+import java.util.stream.Collectors;
+
+/**
+ * A config to define the behavior of serializers in a Flink job. It manages the
+ * registered types and serializers. The config is created from the job
+ * configuration and used by Flink to create serializers for data types.
+ */
+@PublicEvolving
+public final class SerializerConfig implements Serializable {
+
+private static final long serialVersionUID = 1L;
+
+/**
+ * In the long run, this field should be somehow merged with the {@link 
Configuration} from
+ * StreamExecutionEnvironment.
+ */
+private final Configuration configuration = new Configuration();

Review Comment:
   Updated






Re: [PR] [FLINK-33634] Add Conditions to Flink CRD's Status field [flink-kubernetes-operator]

2024-01-17 Thread via GitHub


lajith2006 commented on PR #749:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/749#issuecomment-1897888444

   Sure, I will open FLIP with design. 





Re: [PR] [FLINK-34049][table]Refactor classes related to window TVF aggregation to prepare for non-aligned windows [flink]

2024-01-17 Thread via GitHub


xuyangzhong commented on PR #24068:
URL: https://github.com/apache/flink/pull/24068#issuecomment-1897862786

   @flinkbot run azure





[jira] [Updated] (FLINK-34141) Bash exited with code '143'

2024-01-17 Thread xuyang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuyang updated FLINK-34141:
---
Description: 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56544=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901]

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56503=logs=66645748-20ed-5f80-dbdf-bb5906c15462=e32bdfab-58bb-53ea-d411-d67a54d2939f

 

> Bash exited with code '143'
> ---
>
> Key: FLINK-34141
> URL: https://issues.apache.org/jira/browse/FLINK-34141
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Reporter: xuyang
>Priority: Major
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56544=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901]
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56503=logs=66645748-20ed-5f80-dbdf-bb5906c15462=e32bdfab-58bb-53ea-d411-d67a54d2939f
>  





[jira] [Created] (FLINK-34141) Bash exited with code '143'

2024-01-17 Thread xuyang (Jira)
xuyang created FLINK-34141:
--

 Summary: Bash exited with code '143'
 Key: FLINK-34141
 URL: https://issues.apache.org/jira/browse/FLINK-34141
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Reporter: xuyang








Re: [PR] [FLINK-34090][core] Introduce SerializerConfig [flink]

2024-01-17 Thread via GitHub


X-czh commented on code in PR #24127:
URL: https://github.com/apache/flink/pull/24127#discussion_r1456946751


##
flink-core/src/main/java/org/apache/flink/api/common/serialization/SerializerConfig.java:
##
@@ -0,0 +1,413 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.common.serialization;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.api.common.typeinfo.TypeInfoFactory;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.ConfigurationUtils;
+import org.apache.flink.configuration.PipelineOptions;
+import org.apache.flink.configuration.ReadableConfig;
+
+import com.esotericsoftware.kryo.Serializer;
+
+import java.io.Serializable;
+import java.util.LinkedHashMap;
+import java.util.LinkedHashSet;
+import java.util.List;
+import java.util.Objects;
+import java.util.stream.Collectors;
+
+/**
+ * A config to define the behavior of serializers in a Flink job. It manages the
+ * registered types and serializers. The config is created from the job
+ * configuration and used by Flink to create serializers for data types.
+ */
+@PublicEvolving
+public final class SerializerConfig implements Serializable {
+
+private static final long serialVersionUID = 1L;
+
+/**
+ * In the long run, this field should be somehow merged with the {@link 
Configuration} from
+ * StreamExecutionEnvironment.
+ */
+private final Configuration configuration = new Configuration();

Review Comment:
   Yes, it's possible to use fields. We just need to introduce a default 
constructor that populates the default values of the fields with the default 
Configuration object.
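[Editor's note] The suggestion discussed in this review thread, plain fields seeded by a default constructor instead of a nested Configuration, could look roughly like the sketch below. This is a hypothetical illustration, not the PR's code: the class name, fields, and option defaults are all stand-ins.

```java
import java.io.Serializable;
import java.util.LinkedHashSet;
import java.util.Set;

// Sketch of the reviewer's idea: plain fields instead of a nested
// Configuration, with a default constructor populating the defaults.
// Field names and defaults are illustrative, not the actual PR contents.
public final class SerializerConfigSketch implements Serializable {
    private static final long serialVersionUID = 1L;

    private boolean forceKryo;
    private boolean genericTypesEnabled;
    private final Set<Class<?>> registeredKryoTypes = new LinkedHashSet<>();

    /** Default constructor populates the fields with the option defaults. */
    public SerializerConfigSketch() {
        this.forceKryo = false;          // default of a hypothetical FORCE_KRYO option
        this.genericTypesEnabled = true; // default of a hypothetical GENERIC_TYPES option
    }

    public boolean isForceKryoEnabled() { return forceKryo; }

    public void setForceKryo(boolean enabled) { this.forceKryo = enabled; }

    public boolean hasGenericTypesEnabled() { return genericTypesEnabled; }

    public void registerKryoType(Class<?> type) { registeredKryoTypes.add(type); }

    public Set<Class<?>> getRegisteredKryoTypes() { return registeredKryoTypes; }
}
```

A serialization-friendly design choice here is that every field has a concrete default, so a deserialized or freshly constructed config is always fully populated without consulting a Configuration object.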






Re: [PR] [FLINK-34090][core] Introduce SerializerConfig [flink]

2024-01-17 Thread via GitHub


reswqa commented on code in PR #24127:
URL: https://github.com/apache/flink/pull/24127#discussion_r1456941270


##
flink-core/src/main/java/org/apache/flink/api/common/serialization/SerializerConfig.java:
##
@@ -0,0 +1,413 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.common.serialization;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.api.common.typeinfo.TypeInfoFactory;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.ConfigurationUtils;
+import org.apache.flink.configuration.PipelineOptions;
+import org.apache.flink.configuration.ReadableConfig;
+
+import com.esotericsoftware.kryo.Serializer;
+
+import java.io.Serializable;
+import java.util.LinkedHashMap;
+import java.util.LinkedHashSet;
+import java.util.List;
+import java.util.Objects;
+import java.util.stream.Collectors;
+
+/**
+ * A config to define the behavior of serializers in a Flink job. It manages the
+ * registered types and serializers. The config is created from the job
+ * configuration and used by Flink to create serializers for data types.
+ */
+@PublicEvolving
+public final class SerializerConfig implements Serializable {
+
+private static final long serialVersionUID = 1L;
+
+/**
+ * In the long run, this field should be somehow merged with the {@link 
Configuration} from
+ * StreamExecutionEnvironment.
+ */
+private final Configuration configuration = new Configuration();

Review Comment:
   Do we have to introduce a `Configuration` here? Can we use fields instead?






[jira] [Commented] (FLINK-34129) MiniBatchGlobalGroupAggFunction will make -D as +I then make +I as -U when state expired

2024-01-17 Thread Hongshun Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808018#comment-17808018
 ] 

Hongshun Wang commented on FLINK-34129:
---

[~fsk119], [~lsy], [~andrewlinc...@gmail.com], CC

> MiniBatchGlobalGroupAggFunction will make -D as +I then make +I as -U when 
> state expired 
> -
>
> Key: FLINK-34129
> URL: https://issues.apache.org/jira/browse/FLINK-34129
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.18.1
>Reporter: Hongshun Wang
>Priority: Major
> Fix For: 1.19.0
>
>
> Take sum for example:
> When the state is expired and an update operation arrives from the source, 
> MiniBatchGlobalGroupAggFunction takes -U[1, 20] and +U[1, 20] as input, but 
> will emit +I[1, -20] and -D[1, -20]. The sink will delete the data from the 
> external database.
> Let's see why this happens:
>  * when the state is expired and -U[1, 20] arrives, 
> MiniBatchGlobalGroupAggFunction will create a new sum accumulator and set 
> firstRow to true.
> {code:java}
> if (stateAcc == null) { 
>     stateAcc = globalAgg.createAccumulators(); 
>     firstRow = true; 
> }   {code}
>  * then the sum accumulator will retract the sum value to -20
>  * As this is the first row, MiniBatchGlobalGroupAggFunction will change -U 
> to +I, then emit it downstream.
> {code:java}
> if (!recordCounter.recordCountIsZero(acc)) {
>    // if this was not the first row and we have to emit retractions
>     if (!firstRow) {
>        // ignore
>     } else {
>     // update acc to state
>     accState.update(acc);
>  
>    // this is the first, output new result
>    // prepare INSERT message for new row
>    resultRow.replace(currentKey, newAggValue).setRowKind(RowKind.INSERT);
>    out.collect(resultRow);
> }  {code}
>  * when the next +U[1, 20] arrives, the sum accumulator will retract the sum 
> value to 0, so RetractionRecordCounter#recordCountIsZero will return true. 
> Because firstRow = false now, it will change the +U to -D, then emit it 
> downstream.
> {code:java}
> if (!recordCounter.recordCountIsZero(acc)) {
>     // ignore
> }else{
>    // we retracted the last record for this key
>    // if this is not first row sent out a DELETE message
>    if (!firstRow) {
>    // prepare DELETE message for previous row
>    resultRow.replace(currentKey, prevAggValue).setRowKind(RowKind.DELETE);
>    out.collect(resultRow);
> } {code}
>  
> So the sink will receive +I and -D after a source update operation, and the 
> data will be deleted.
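[Editor's note] The two-step failure described in this report can be condensed into a runnable sketch. This is an editorial illustration, not Flink code: `SimpleAcc`, `process`, and the string row encoding are hypothetical stand-ins for the accumulator, the state access, and the RowKind handling.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal model of the firstRow logic quoted above: expired state makes a
// retraction look like the first row for its key, so -U becomes +I and the
// following +U drains the count to zero and becomes -D.
public class ExpiredStateDemo {
    static final class SimpleAcc { long sum; long count; }

    private static SimpleAcc stateAcc; // null simulates expired state
    private static final List<String> emitted = new ArrayList<>();

    static void process(String kind, long value) {
        boolean firstRow = false;
        if (stateAcc == null) {            // expired state -> fresh accumulator
            stateAcc = new SimpleAcc();
            firstRow = true;
        }
        long prevAggValue = stateAcc.sum;
        boolean retract = kind.startsWith("-");
        stateAcc.sum += retract ? -value : value;
        stateAcc.count += retract ? -1 : 1;
        if (stateAcc.count != 0) {         // recordCountIsZero() == false
            if (firstRow) {
                emitted.add("+I[" + stateAcc.sum + "]"); // first row -> INSERT
            }
        } else if (!firstRow) {            // count dropped to zero -> DELETE
            emitted.add("-D[" + prevAggValue + "]");
        }
    }

    static List<String> run() {
        stateAcc = null;                   // the key's state has expired
        emitted.clear();
        process("-U", 20);                 // emitted as +I[-20], not a retraction
        process("+U", 20);                 // emitted as -D[-20]: sink deletes the row
        return emitted;
    }

    public static void main(String[] args) {
        System.out.println(run());         // [+I[-20], -D[-20]]
    }
}
```

Running the sketch reproduces the reported +I/-D pair after a single source update, which is why the sink ends up deleting the row.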





Re: [PR] [FLINK-34049][table]Refactor classes related to window TVF aggregation to prepare for non-aligned windows [flink]

2024-01-17 Thread via GitHub


xuyangzhong commented on PR #24068:
URL: https://github.com/apache/flink/pull/24068#issuecomment-1897831028

   Hi, @fsk119 . Thank you for your review. 
   I think some of the questions you raised are valuable and meaningful. I have 
created several independent JIRAs to follow up and further optimize the old code 
that was previously implemented. For this PR, the changes are purely 
refactoring; no changes to the nature of the original implementation are 
introduced.





Re: [PR] [FLINK-34090][core] Introduce SerializerConfig [flink]

2024-01-17 Thread via GitHub


flinkbot commented on PR #24127:
URL: https://github.com/apache/flink/pull/24127#issuecomment-1897829521

   
   ## CI report:
   
   * 5dacb13c87a7a1a8b957d1d93d35ff481a360cd5 UNKNOWN
   
   
   Bot commands:
   The @flinkbot bot supports the following commands:

   - `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] [FLINK-33928][table-planner] Should not throw exception while creating view with specify field names [flink]

2024-01-17 Thread via GitHub


swuferhong commented on PR #24096:
URL: https://github.com/apache/flink/pull/24096#issuecomment-1897827367

   @flinkbot run azure





[jira] [Updated] (FLINK-34090) Introduce SerializerConfig for serialization

2024-01-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34090:
---
Labels: pull-request-available  (was: )

> Introduce SerializerConfig for serialization
> 
>
> Key: FLINK-34090
> URL: https://issues.apache.org/jira/browse/FLINK-34090
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Type Serialization System
>Affects Versions: 1.19.0
>Reporter: Fang Yong
>Assignee: Zhanghao Chen
>Priority: Major
>  Labels: pull-request-available
>






[PR] [FLINK-34090][core] Introduce SerializerConfig [flink]

2024-01-17 Thread via GitHub


X-czh opened a new pull request, #24127:
URL: https://github.com/apache/flink/pull/24127

   
   
   ## What is the purpose of the change
   
   Introduce SerializerConfig to decouple serializer configuration from 
ExecutionConfig.
   
   ## Brief change log
   
   Introduce SerializerConfig and wire serializer-related methods in 
ExecutionConfig to it.
   
   ## Verifying this change
   
   This change is already covered by existing tests: ExecutionConfigTest.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no) no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no) yes
 - The serializers: (yes / no / don't know) no
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know) no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know) 
no
 - The S3 file system connector: (yes / no / don't know) no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no) no
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented) not applicable
   





[jira] [Commented] (FLINK-34105) Akka timeout happens in TPC-DS benchmarks

2024-01-17 Thread dizhou cao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808008#comment-17808008
 ] 

dizhou cao commented on FLINK-34105:


[~zhuzh] Great suggestion. We've tried to move the serialization of 
shuffleDescriptor logic on the JobMaster side to the futureExecutor for 
asynchronous serialization. Moving the deserialization to a separate thread 
pool on the TM side would disrupt the original synchronous logic of the 
submission interface, potentially introducing additional risks. Not 
implementing serialization modifications on the TM side results in about a 30% 
performance degradation in our tests under OLAP scenarios. Furthermore, we plan 
to advance batch submission optimizations for Task submission stages in OLAP 
scenarios. We intend to test the asynchronous serialization optimization 
internally, as its performance is roughly consistent with placing it in the 
Akka remote thread pool. Therefore, for this fix, we plan to move the 
serialization operation on the JobMaster side to an asynchronous thread pool, 
while keeping the deserialization on the TM side back on the main thread. WDYT?
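[Editor's note] The idea proposed here, serializing on a separate executor so the main/RPC thread is not blocked, can be sketched as below. This is not Flink's actual code: `serialize`, `serializeAsync`, and the string payload are hypothetical stand-ins for ShuffleDescriptor serialization on the JobMaster's futureExecutor.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: push serialization work onto an executor (standing in for the
// JobMaster's futureExecutor) and hand back a future of the bytes.
public class AsyncSerializationSketch {

    static byte[] serialize(Serializable payload) {
        try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
             ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(payload);
            oos.flush();
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static CompletableFuture<byte[]> serializeAsync(
            Serializable payload, ExecutorService futureExecutor) {
        // The calling (main/RPC) thread returns immediately; the heavy
        // serialization runs on futureExecutor.
        return CompletableFuture.supplyAsync(() -> serialize(payload), futureExecutor);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService futureExecutor = Executors.newSingleThreadExecutor();
        byte[] bytes = serializeAsync("descriptor-payload", futureExecutor).get();
        System.out.println(bytes.length > 0);
        futureExecutor.shutdown();
    }
}
```

The trade-off discussed in the comment maps directly onto this shape: the producing side can serialize asynchronously, while the consuming (TM) side may still deserialize synchronously to keep the submission path's ordering guarantees.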

> Akka timeout happens in TPC-DS benchmarks
> -
>
> Key: FLINK-34105
> URL: https://issues.apache.org/jira/browse/FLINK-34105
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: Zhu Zhu
>Assignee: Yangze Guo
>Priority: Critical
> Attachments: image-2024-01-16-13-59-45-556.png
>
>
> We noticed akka timeout happens in 10TB TPC-DS benchmarks in 1.19. The 
> problem did not happen in 1.18.0.
> After bisecting, we find the problem was introduced in FLINK-33532.
>  !image-2024-01-16-13-59-45-556.png|width=800! 





Re: [PR] [FLINK-33365] include filters with Lookup joins [flink-connector-jdbc]

2024-01-17 Thread via GitHub


libenchao commented on PR #79:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/79#issuecomment-1897803207

   @davidradl Thanks for the work. I think the 
`testSelectStatementWithWeirdCharacters` you added is an orthogonal problem to 
the current one; it fails without the current changes, right? If yes, it can be 
tracked as a separate issue that does not block this one.
   
   The current issue is mainly about how to let the JDBC lookup function handle 
the pushed predicates, which were previously just ignored. So one test I would 
like to see is an ITCase showing that the result is correct with predicates 
pushed down.





Re: [PR] [FLINK-33182][table] Allow metadata columns in Ndu-analyze with ChangelogNormalize [flink]

2024-01-17 Thread via GitHub


lincoln-lil commented on PR #24121:
URL: https://github.com/apache/flink/pull/24121#issuecomment-1897800621

   @flinkbot run azure





Re: [PR] [FLINK-33434][runtime-web] Support invoke async-profiler on TaskManager via REST API [flink]

2024-01-17 Thread via GitHub


yuchen-ecnu commented on PR #24041:
URL: https://github.com/apache/flink/pull/24041#issuecomment-1897790800

   @flinkbot run azure





[jira] [Resolved] (FLINK-31449) Remove DeclarativeSlotManager related logic

2024-01-17 Thread Weihua Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weihua Hu resolved FLINK-31449.
---
Resolution: Fixed

resolved in master: 551b776a82193eeb6bb4a9b8a6925a386ea502e4

> Remove DeclarativeSlotManager related logic
> ---
>
> Key: FLINK-31449
> URL: https://issues.apache.org/jira/browse/FLINK-31449
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Coordination
>Reporter: Weihua Hu
>Assignee: Weihua Hu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> The DeclarativeSlotManager and related configs will be completely removed in 
> the next release after the default SlotManager change to 
> FineGrainedSlotManager.
>  
> We should do the job in 1.19 version.





Re: [PR] [FLINK-31449][resourcemanager] Remove DeclarativeSlotManager related logic [flink]

2024-01-17 Thread via GitHub


huwh merged PR #24102:
URL: https://github.com/apache/flink/pull/24102





Re: [PR] [FLINK-34049][table]Refactor classes related to window TVF aggregation to prepare for non-aligned windows [flink]

2024-01-17 Thread via GitHub


xuyangzhong commented on code in PR #24068:
URL: https://github.com/apache/flink/pull/24068#discussion_r1456843664


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/window/windowtvf/common/WindowAssigner.java:
##
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.operators.window.windowtvf.common;
+
+import 
org.apache.flink.table.runtime.operators.window.groupwindow.assigners.GroupWindowAssigner;
+
+import java.io.Serializable;
+
+/**
+ * WindowAssigner is used to assign windows to elements.
+ *
+ * The difference between {@link WindowAssigner} and {@link GroupWindowAssigner}
+ * is that this window assigner is translated from the new window TVF syntax,
+ * while the other comes from the legacy GROUP WINDOW FUNCTION syntax. In the
+ * long run, {@link GroupWindowAssigner} will be dropped.
+ *
+ * See more details in {@link AbstractWindowOperator}.
+ *
+ * TODO support UnsliceAssigner.
+ */
+public interface WindowAssigner extends Serializable {
+
+/**
+ * Returns {@code true} if elements are assigned to windows based on event 
time, {@code false}
+ * based on processing time.
+ */
+boolean isEventTime();

Review Comment:
   Agreed! I created a new JIRA for it: 
https://issues.apache.org/jira/browse/FLINK-34139.






Re: [PR] [FLINK-34049][table]Refactor classes related to window TVF aggregation to prepare for non-aligned windows [flink]

2024-01-17 Thread via GitHub


xuyangzhong commented on code in PR #24068:
URL: https://github.com/apache/flink/pull/24068#discussion_r1456843334


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/window/groupwindow/context/WindowContext.java:
##
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.operators.window.groupwindow.context;
+
+import org.apache.flink.api.common.state.State;
+import org.apache.flink.api.common.state.StateDescriptor;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.runtime.operators.window.Window;
+import 
org.apache.flink.table.runtime.operators.window.groupwindow.internal.InternalWindowProcessFunction;
+import 
org.apache.flink.table.runtime.operators.window.groupwindow.triggers.Trigger;
+
+import java.time.ZoneId;
+import java.util.Collection;
+
+/** A context contains some information used for {@link 
InternalWindowProcessFunction}. */
+public interface WindowContext<K, W extends Window> {
+
+/**
+ * Creates a partitioned state handle, using the state backend configured 
for this task.
+ *
+ * @throws IllegalStateException Thrown, if the key/value state was 
already initialized.
+ * @throws Exception Thrown, if the state backend cannot create the 
key/value state.
+ */
+<S extends State> S getPartitionedState(StateDescriptor<S, ?> stateDescriptor) throws Exception;
+
+/** @return current key of current processed element. */
+K currentKey();
+
+/** Returns the current processing time. */
+long currentProcessingTime();
+
+/** Returns the current event-time watermark. */
+long currentWatermark();
+
+/** Returns the shifted timezone of the window. */
+ZoneId getShiftTimeZone();
+
+/** Gets the accumulators of the given window. */
+RowData getWindowAccumulators(W window) throws Exception;
+
+/** Sets the accumulators of the given window. */
+void setWindowAccumulators(W window, RowData acc) throws Exception;
+
+/** Clear window state of the given window. */
+void clearWindowState(W window) throws Exception;
+
+/** Clear previous agg state (used for retraction) of the given window. */
+void clearPreviousState(W window) throws Exception;
+
+/** Call {@link Trigger#clear(Window)}} on trigger. */
+void clearTrigger(W window) throws Exception;

Review Comment:
   Agree! I created a new JIRA for it: https://issues.apache.org/jira/browse/FLINK-34140.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34049][table]Refactor classes related to window TVF aggregation to prepare for non-aligned windows [flink]

2024-01-17 Thread via GitHub


xuyangzhong commented on code in PR #24068:
URL: https://github.com/apache/flink/pull/24068#discussion_r1456843334


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/window/groupwindow/context/WindowContext.java:
##
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.operators.window.groupwindow.context;
+
+import org.apache.flink.api.common.state.State;
+import org.apache.flink.api.common.state.StateDescriptor;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.runtime.operators.window.Window;
+import 
org.apache.flink.table.runtime.operators.window.groupwindow.internal.InternalWindowProcessFunction;
+import 
org.apache.flink.table.runtime.operators.window.groupwindow.triggers.Trigger;
+
+import java.time.ZoneId;
+import java.util.Collection;
+
+/** A context contains some information used for {@link 
InternalWindowProcessFunction}. */
+public interface WindowContext<K, W extends Window> {
+
+/**
+ * Creates a partitioned state handle, using the state backend configured 
for this task.
+ *
+ * @throws IllegalStateException Thrown, if the key/value state was 
already initialized.
+ * @throws Exception Thrown, if the state backend cannot create the 
key/value state.
+ */
+<S extends State> S getPartitionedState(StateDescriptor<S, ?> stateDescriptor) throws Exception;
+
+/** @return current key of current processed element. */
+K currentKey();
+
+/** Returns the current processing time. */
+long currentProcessingTime();
+
+/** Returns the current event-time watermark. */
+long currentWatermark();
+
+/** Returns the shifted timezone of the window. */
+ZoneId getShiftTimeZone();
+
+/** Gets the accumulators of the given window. */
+RowData getWindowAccumulators(W window) throws Exception;
+
+/** Sets the accumulators of the given window. */
+void setWindowAccumulators(W window, RowData acc) throws Exception;
+
+/** Clear window state of the given window. */
+void clearWindowState(W window) throws Exception;
+
+/** Clear previous agg state (used for retraction) of the given window. */
+void clearPreviousState(W window) throws Exception;
+
+/** Call {@link Trigger#clear(Window)}} on trigger. */
+void clearTrigger(W window) throws Exception;

Review Comment:
   Agree! I created a new JIRA for it: https://issues.apache.org/jira/browse/FLINK-34139.






Re: [PR] [FLINK-34049][table]Refactor classes related to window TVF aggregation to prepare for non-aligned windows [flink]

2024-01-17 Thread via GitHub


xuyangzhong commented on code in PR #24068:
URL: https://github.com/apache/flink/pull/24068#discussion_r1456842979


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/window/groupwindow/context/WindowContext.java:
##
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.operators.window.groupwindow.context;
+
+import org.apache.flink.api.common.state.State;
+import org.apache.flink.api.common.state.StateDescriptor;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.runtime.operators.window.Window;
+import 
org.apache.flink.table.runtime.operators.window.groupwindow.internal.InternalWindowProcessFunction;
+import 
org.apache.flink.table.runtime.operators.window.groupwindow.triggers.Trigger;
+
+import java.time.ZoneId;
+import java.util.Collection;
+
+/** A context contains some information used for {@link 
InternalWindowProcessFunction}. */
+public interface WindowContext<K, W extends Window> {

Review Comment:
   I'm afraid so, as in some places where the window context is used (such as 
InternalWindowProcessFunction), a more generic type parameter K is employed 
instead of the specific RowData type. I believe maintaining the most generic 
key type is not necessarily a bad thing.
   Of course, if you think it needs to be changed to the specific RowData type, 
I can introduce a separate JIRA to address it.






[jira] [Comment Edited] (FLINK-30656) Provide more logs for schema compatibility check

2024-01-17 Thread Hangxiang Yu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807968#comment-17807968
 ] 

Hangxiang Yu edited comment on FLINK-30656 at 1/18/24 3:57 AM:
---

We should support retaining a message in TypeSerializerSchemaCompatibility, just like SchemaCompatibility in Avro.

Then every TypeSerializer could define its own message about compatibility.

I have two proposals:

1. Add new methods TypeSerializerSchemaCompatibility#incompatible and #compatibleAfterMigration that take a message, e.g. TypeSerializerSchemaCompatibility#incompatible(String message), and deprecate the related old methods.
{code:java}
public static <T> TypeSerializerSchemaCompatibility<T> incompatible(String message) {
    return new TypeSerializerSchemaCompatibility<>(Type.INCOMPATIBLE, message, null);
} {code}
2. Add a new method called TypeSerializerSchemaCompatibility#withMessage:

 
{code:java}
private TypeSerializerSchemaCompatibility<T> withMessage(String message) {
    this.message = message;
    return this;
} {code}
Proposal 1 behaves just like SchemaCompatibility in Avro, which forces the caller to provide a message. But since TypeSerializerSchemaCompatibility is a PublicEvolving API, maybe we need a FLIP first?
Proposal 2 only adds a new method, so it introduces no breaking change, but every caller (including some custom-defined TypeSerializers) would have to invoke it manually, because forgetting to do so will not fail at compile time.
[~leonard] [~Weijie Guo] WDYT?
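To illustrate the shape of proposal 1, here is a minimal self-contained sketch (this is NOT Flink's real class; the names only mirror it for illustration) of a factory method that forces callers to state why the check failed:

```java
// A minimal sketch of proposal 1: the incompatible(...) factory takes a
// message, so every caller has to explain WHY the serializers are incompatible.
public class CompatSketch {

    enum Type { COMPATIBLE_AS_IS, COMPATIBLE_AFTER_MIGRATION, INCOMPATIBLE }

    static final class TypeSerializerSchemaCompatibility<T> {
        private final Type type;
        private final String message;

        private TypeSerializerSchemaCompatibility(Type type, String message) {
            this.type = type;
            this.message = message;
        }

        // Proposal 1: no message-less overload, so callers must give a reason.
        static <T> TypeSerializerSchemaCompatibility<T> incompatible(String message) {
            return new TypeSerializerSchemaCompatibility<>(Type.INCOMPATIBLE, message);
        }

        boolean isIncompatible() {
            return type == Type.INCOMPATIBLE;
        }

        String getMessage() {
            return message;
        }
    }

    public static void main(String[] args) {
        TypeSerializerSchemaCompatibility<Object> result =
                TypeSerializerSchemaCompatibility.incompatible(
                        "value serializer changed from StringSerializer to LongSerializer");
        // A StateMigrationException could then include this reason instead of
        // only printing the two serializer class names.
        System.out.println("INCOMPATIBLE: " + result.getMessage());
    }
}
```

With this shape, the MapSerializer example from the issue description could report which nested serializer caused the incompatibility.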
 


was (Author: masteryhx):
We should support to remain some messages for TypeSerializerSchemaCompatibility 
just like SchemaCompatibility in Avro.

Then every TypeSerializer could defined their own message about compatibility.

I have two proposals:

1. Add new method called TypeSerializerSchemaCompatibility#incompatible and 
#compatibleAfterMigration to support message, e.g. 
TypeSerializerSchemaCompatibility#incompatible(String message). And deprecated 
related old methods.
{code:java}
public static  TypeSerializerSchemaCompatibility incompatible(String 
message) {
return new TypeSerializerSchemaCompatibility<>(Type.INCOMPATIBLE, message, 
null);
} {code}
2. Add a new method called TypeSerializerSchemaCompatibility#withMessage:

 
{code:java}
private TypeSerializerSchemaCompatibility withMessage(String message) {
this.message = message;
return this;
} {code}
Proposal 1 behaves just like SchemaCompatibility in Avro who forces caller to 
add message. But since TypeSerializerSchemaCompatibility is a PublicEvolving 
API, maybe we need a FLIP firstly?
Proposal 2 just add a new method so that we will not break change, but every 
callers (including some custom-defined TypeSerializers) should call it manually 
because it will not fail when compile.
[~leonard] WDYT?
 

> Provide more logs for schema compatibility check
> 
>
> Key: FLINK-30656
> URL: https://issues.apache.org/jira/browse/FLINK-30656
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Type Serialization System
>Reporter: Hangxiang Yu
>Assignee: Hangxiang Yu
>Priority: Major
>
> Currently, we have very few logs and exception info when checking schema 
> compatibility.
> It's difficult to see why the compatibility is not compatible, especially for 
> some complicated nested serializers.
> For example, for map serializer, when it's not compatible, we may only see 
> below without other information:
> {code:java}
> Caused by: org.apache.flink.util.StateMigrationException: The new state 
> serializer 
> (org.apache.flink.api.common.typeutils.base.MapSerializer@e95e076a) must not 
> be incompatible with the old state serializer 
> (org.apache.flink.api.common.typeutils.base.MapSerializer@c33b100f). {code}
> So I think we could add more infos when checking the compatibility.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34049][table]Refactor classes related to window TVF aggregation to prepare for non-aligned windows [flink]

2024-01-17 Thread via GitHub


xuyangzhong commented on code in PR #24068:
URL: https://github.com/apache/flink/pull/24068#discussion_r1456822860


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/window/MergeCallback.java:
##
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.operators.window;
+
+import javax.annotation.Nullable;
+
+/**
+ * Callback to be used when merging slices or windows for specifying which slices or windows
+ * should be merged.
+ *
+ * @param <W> The type {@link Window} for windows or the type {@link Long} for slices that this
+ * callback is used to merge.
+ * @param <R> The result type like {@link java.util.Collection} or {@link Iterable} to specify which
+ * slices or windows should be merged. TODO use {@link java.util.Collection} uniformly.
+ */
+public interface MergeCallback<W, R> {
+
+/**
+ * Specifies that states of the given windows or slices should be merged 
into the result window
+ * or slice.
+ *
+ * @param mergeResult The resulting merged window or slice, {@code null} 
if it represents a
+ * non-state namespace.
+ * @param toBeMerged Windows or slices that should be merged into one 
window or slice.
+ */
+void merge(@Nullable W mergeResult, R toBeMerged) throws Exception;

Review Comment:
   Agree with you. I created an extra JIRA for this: https://issues.apache.org/jira/browse/FLINK-34138






Re: [PR] [FLINK-34049][table]Refactor classes related to window TVF aggregation to prepare for non-aligned windows [flink]

2024-01-17 Thread via GitHub


xuyangzhong commented on code in PR #24068:
URL: https://github.com/apache/flink/pull/24068#discussion_r1456817705


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/aggregate/window/builder/AbstractWindowAggOperatorBuilder.java:
##
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.operators.aggregate.window.builder;
+
+import org.apache.flink.table.data.RowData;
+import 
org.apache.flink.table.runtime.generated.GeneratedNamespaceAggsHandleFunction;
+import 
org.apache.flink.table.runtime.operators.window.windowtvf.common.AbstractWindowOperator;
+import org.apache.flink.table.runtime.typeutils.AbstractRowDataSerializer;
+import org.apache.flink.table.runtime.typeutils.PagedTypeSerializer;
+
+import java.time.ZoneId;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * The {@link AbstractWindowAggOperatorBuilder} is a base class for building 
window aggregate
+ * operators.
+ *
+ * See more details in {@link SlicingWindowAggOperatorBuilder}.
+ *
+ * TODO support UnslicingWindowAggOperatorBuilder.
+ *
+ * @param <W> The type of the window. {@link Long} for slicing window.
+ * @param <T> The implementation of the abstract builder.
+ */
+public abstract class AbstractWindowAggOperatorBuilder<

Review Comment:
   Remove it and do this refactor later.



##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/aggregate/window/builder/AbstractWindowAggOperatorBuilder.java:
##
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.operators.aggregate.window.builder;
+
+import org.apache.flink.table.data.RowData;
+import 
org.apache.flink.table.runtime.generated.GeneratedNamespaceAggsHandleFunction;
+import 
org.apache.flink.table.runtime.operators.window.windowtvf.common.AbstractWindowOperator;
+import org.apache.flink.table.runtime.typeutils.AbstractRowDataSerializer;
+import org.apache.flink.table.runtime.typeutils.PagedTypeSerializer;
+
+import java.time.ZoneId;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * The {@link AbstractWindowAggOperatorBuilder} is a base class for building 
window aggregate
+ * operators.
+ *
+ * See more details in {@link SlicingWindowAggOperatorBuilder}.
+ *
+ * TODO support UnslicingWindowAggOperatorBuilder.
+ *
+ * @param <W> The type of the window. {@link Long} for slicing window.
+ * @param <T> The implementation of the abstract builder.
+ */
+public abstract class AbstractWindowAggOperatorBuilder<
+W, T extends AbstractWindowAggOperatorBuilder<W, T>> {
+
+protected AbstractRowDataSerializer<RowData> inputSerializer;
+protected PagedTypeSerializer<RowData> keySerializer;
+protected AbstractRowDataSerializer<RowData> accSerializer;
+protected GeneratedNamespaceAggsHandleFunction<W> generatedAggregateFunction;
+protected GeneratedNamespaceAggsHandleFunction<W> localGeneratedAggregateFunction;
+protected GeneratedNamespaceAggsHandleFunction<W> globalGeneratedAggregateFunction;
+protected ZoneId shiftTimeZone;
+
+public T inputSerializer(AbstractRowDataSerializer<RowData> inputSerializer) {
+this.inputSerializer = inputSerializer;
+return self();
+}
+
+public T shiftTimeZone(ZoneId shiftTimeZone) {
+this.shiftTimeZone = shiftTimeZone;
+return self();
+}
+
+public T 
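The setters in the builder above return `self()`, which is the standard self-typed (F-bounded) builder idiom: the type parameter `T` is the concrete builder, so chained calls keep the subclass type even when a setter is declared on the base class. A minimal standalone sketch (hypothetical names, not Flink code):

```java
// Minimal sketch of the self-typed builder pattern used by
// AbstractWindowAggOperatorBuilder and its concrete subclasses.
public class BuilderDemo {

    // Base builder: T is the concrete builder type, so shared setters can
    // return it and keep method chaining type-safe.
    abstract static class BaseBuilder<T extends BaseBuilder<T>> {
        protected String name;

        @SuppressWarnings("unchecked")
        protected T self() {
            return (T) this;
        }

        public T name(String name) {
            this.name = name;
            return self();
        }
    }

    // Concrete builder fixes T to itself, the way a slicing window builder
    // would extend the abstract window-agg builder.
    static final class ConcreteBuilder extends BaseBuilder<ConcreteBuilder> {
        private int size;

        public ConcreteBuilder size(int size) {
            this.size = size;
            return this;
        }

        public String build() {
            return name + ":" + size;
        }
    }

    public static void main(String[] args) {
        // name(...) is declared on the base class but returns ConcreteBuilder,
        // so size(...) can be chained after it without a cast.
        System.out.println(new ConcreteBuilder().name("window").size(3).build());
    }
}
```

Without the `T extends BaseBuilder<T>` bound, `name(...)` would return the base type and the chained `size(...)` call would not compile.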

Re: [PR] [FLINK-34049][table]Refactor classes related to window TVF aggregation to prepare for non-aligned windows [flink]

2024-01-17 Thread via GitHub


xuyangzhong commented on code in PR #24068:
URL: https://github.com/apache/flink/pull/24068#discussion_r1456817108


##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/logical/WindowSpec.java:
##
@@ -31,4 +32,15 @@
 public interface WindowSpec {
 
 String toSummaryString(String windowing);
+
+/**
+ * Return if the window is an aligned window.
+ *
+ * See more details about aligned window and unaligned window in {@link
+ * 
org.apache.flink.table.runtime.operators.window.windowtvf.common.AbstractWindowOperator}.
+ *
+ * TODO introduce unaligned window like session window.

Review Comment:
   Add a jira issue for it.






Re: [PR] [FLINK-33928][table-planner] Should not throw exception while creating view with specify field names [flink]

2024-01-17 Thread via GitHub


swuferhong commented on PR #24096:
URL: https://github.com/apache/flink/pull/24096#issuecomment-1897742954

   @flinkbot run azure





[jira] [Assigned] (FLINK-34140) Rename WindowContext and TriggerContext in window

2024-01-17 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-34140:
-

Assignee: xuyang

> Rename WindowContext and TriggerContext in window
> -
>
> Key: FLINK-34140
> URL: https://issues.apache.org/jira/browse/FLINK-34140
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>
> Currently, WindowContext and TriggerContext not only contains a series of get 
> methods to obtain context information, but also includes behaviors such as 
> clear.
> Maybe it's better to rename them as WindowDelegator and TriggerDelegator.





[jira] [Assigned] (FLINK-34139) The slice assigner should not reveal its event time or process time at the interface level.

2024-01-17 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-34139:
-

Assignee: xuyang

> The slice assigner should not reveal its event time or process time at the 
> interface level.
> ---
>
> Key: FLINK-34139
> URL: https://issues.apache.org/jira/browse/FLINK-34139
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>
> Currently, there is a function `boolean isEventTime()` to tell other that it 
> is by event time or process time. However, as an assigner, it should not 
> expose this information.





[jira] [Updated] (FLINK-34140) Rename WindowContext and TriggerContext in window

2024-01-17 Thread xuyang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuyang updated FLINK-34140:
---
Summary: Rename WindowContext and TriggerContext in window  (was: Rename 
WindowContext in window)

> Rename WindowContext and TriggerContext in window
> -
>
> Key: FLINK-34140
> URL: https://issues.apache.org/jira/browse/FLINK-34140
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: xuyang
>Priority: Major
>
> Currently, WindowContext not only contains a series of get methods to obtain 
> context information, but also includes behaviors such as clear.
> Maybe it should be renamed as Window





[jira] [Updated] (FLINK-34140) Rename WindowContext and TriggerContext in window

2024-01-17 Thread xuyang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuyang updated FLINK-34140:
---
Description: 
Currently, WindowContext and TriggerContext not only contains a series of get 
methods to obtain context information, but also includes behaviors such as 
clear.

Maybe it should be renamed as WindowDelegator and TriggerDelegator.

  was:
Currently, WindowContext not only contains a series of get methods to obtain 
context information, but also includes behaviors such as clear.

Maybe it should be renamed as Window


> Rename WindowContext and TriggerContext in window
> -
>
> Key: FLINK-34140
> URL: https://issues.apache.org/jira/browse/FLINK-34140
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: xuyang
>Priority: Major
>
> Currently, WindowContext and TriggerContext not only contains a series of get 
> methods to obtain context information, but also includes behaviors such as 
> clear.
> Maybe it should be renamed as WindowDelegator and TriggerDelegator.





[jira] [Updated] (FLINK-34140) Rename WindowContext and TriggerContext in window

2024-01-17 Thread xuyang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuyang updated FLINK-34140:
---
Description: 
Currently, WindowContext and TriggerContext not only contains a series of get 
methods to obtain context information, but also includes behaviors such as 
clear.

Maybe it's better to rename them as WindowDelegator and TriggerDelegator.

  was:
Currently, WindowContext and TriggerContext not only contains a series of get 
methods to obtain context information, but also includes behaviors such as 
clear.

Maybe it should be renamed as WindowDelegator and TriggerDelegator.


> Rename WindowContext and TriggerContext in window
> -
>
> Key: FLINK-34140
> URL: https://issues.apache.org/jira/browse/FLINK-34140
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: xuyang
>Priority: Major
>
> Currently, WindowContext and TriggerContext not only contains a series of get 
> methods to obtain context information, but also includes behaviors such as 
> clear.
> Maybe it's better to rename them as WindowDelegator and TriggerDelegator.





[jira] [Updated] (FLINK-34140) Rename WindowContext in window

2024-01-17 Thread xuyang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuyang updated FLINK-34140:
---
Description: 
Currently, WindowContext not only contains a series of get methods to obtain 
context information, but also includes behaviors such as clear.

Maybe it should be renamed as Window

> Rename WindowContext in window
> --
>
> Key: FLINK-34140
> URL: https://issues.apache.org/jira/browse/FLINK-34140
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: xuyang
>Priority: Major
>
> Currently, WindowContext not only contains a series of get methods to obtain 
> context information, but also includes behaviors such as clear.
> Maybe it should be renamed as Window





[jira] [Created] (FLINK-34140) Rename WindowContext in window

2024-01-17 Thread xuyang (Jira)
xuyang created FLINK-34140:
--

 Summary: Rename WindowContext in window
 Key: FLINK-34140
 URL: https://issues.apache.org/jira/browse/FLINK-34140
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Runtime
Reporter: xuyang








[jira] [Assigned] (FLINK-34138) Improve the interface about MergeCallback in window

2024-01-17 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang reassigned FLINK-34138:
-

Assignee: xuyang

> Improve the interface about MergeCallback in window
> ---
>
> Key: FLINK-34138
> URL: https://issues.apache.org/jira/browse/FLINK-34138
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: xuyang
>Assignee: xuyang
>Priority: Major
>
> As a merge method, the return value type is `void`, that is confusing.





Re: [PR] [FLINK-33565][Scheduler] ConcurrentExceptions works with exception merging [flink]

2024-01-17 Thread via GitHub


1996fanrui commented on code in PR #24003:
URL: https://github.com/apache/flink/pull/24003#discussion_r1456812210


##
flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/failover/ExecutionFailureHandlerTest.java:
##
@@ -183,6 +194,55 @@ void testNonRecoverableFailureHandlingResult() throws 
Exception {
 assertThat(executionFailureHandler.getNumberOfRestarts()).isZero();
 }
 
+/** Test isNewAttempt of {@link FailureHandlingResult} is expected. */
+@Test
+void testNewAttemptAndNumberOfRestarts() throws Exception {

Review Comment:
   Good suggestion!
   
   Also, I extracted the `assertHandlerRootException` and 
`assertHandlerConcurrentException` methods, after that, the 
`testNewAttemptAndNumberOfRestarts` is totally simple.



##
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/FixedDelayRestartBackoffTimeStrategy.java:
##
@@ -61,8 +61,9 @@ public long getBackoffTime() {
 }
 
 @Override
-public void notifyFailure(Throwable cause) {
+public boolean notifyFailure(Throwable cause) {
 currentRestartAttempt++;
+return true;

Review Comment:
   > But from a conceptual point of view: shouldn't we cover it also for the 
two other restart strategies. Or am I missing something here?
   
   In the beginning, I wanted to improve all restart strategies, but during the discussion I got feedback in favor of removing the other restart strategies instead. The core background is this thread: https://lists.apache.org/thread/l7wyc7pndpsvh2h7hj3fw2td9yphrlox
   
   In brief, 3 reasons:
   
   1. The semantics of the options
   
   - the failure-rate strategy's restart upper limit option is named `restart-strategy.failure-rate.max-failures-per-interval`
   - It's `max-failures-per-interval` instead of `max-attempts-per-interval`.
   - If we improved it directly, the name and the behaviour would no longer match.
   2. We recommend users use the `exponential-delay` restart strategy in the future; it's more powerful.
   3. After the FLIP-360 discussion, I found the `exponential-delay` restart strategy can replace the other restart strategies directly if users set `restart-strategy.exponential-delay.backoff-multiplier` = 1.
   
   Actually, I think the other restart strategies can be deprecated in the future.
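   As a sketch (option keys taken from Flink's documented exponential-delay restart strategy; values are illustrative), setting the multiplier to 1 makes the strategy behave like fixed-delay:

```yaml
# exponential-delay degenerates to a fixed 10 s delay when the backoff
# never grows (multiplier = 1 and max-backoff = initial-backoff).
restart-strategy: exponential-delay
restart-strategy.exponential-delay.initial-backoff: 10 s
restart-strategy.exponential-delay.max-backoff: 10 s
restart-strategy.exponential-delay.backoff-multiplier: 1.0
```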



##
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/exceptionhistory/RootExceptionHistoryEntryTest.java:
##
@@ -81,10 +81,14 @@ void testFromFailureHandlingResultSnapshot() throws 
ExecutionException, Interrup
 final CompletableFuture> rootFailureLabels =
 
CompletableFuture.completedFuture(Collections.singletonMap("key", "value"));
 
-final Throwable concurrentException = new 
IllegalStateException("Expected other failure");
-final ExecutionVertex concurrentlyFailedExecutionVertex = 
extractExecutionVertex(1);
-final long concurrentExceptionTimestamp =
-triggerFailure(concurrentlyFailedExecutionVertex, 
concurrentException);
+final Throwable concurrentException1 = new 
IllegalStateException("Expected other failure1");
+final ExecutionVertex concurrentlyFailedExecutionVertex1 = 
extractExecutionVertex(1);
+Predicate exception1Predicate =
+getExceptionHistoryEntryPredicate(
+concurrentException1, 
concurrentlyFailedExecutionVertex1);
+
+final Throwable concurrentException2 = new 
IllegalStateException("Expected other failure2");
+final ExecutionVertex concurrentlyFailedExecutionVertex2 = 
extractExecutionVertex(2);

Review Comment:
   Thanks for the explanation!
   
   > testAddConecurrentExceptions will use all code of this test 
   
   If we have 2 tests, where test1 only calls `testCommon` and test2 calls both `testCommon` and `testPart2`, then test2 covers test1.
   
   2 solutions:
   - Keep both test1 and test2
   - Only keep test2
   
   In the current scenario, is it enough for us to only keep test2? Looking forward to your opinion; either way is fine with me.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34138) Improve the interface about MergeCallback in window

2024-01-17 Thread xuyang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuyang updated FLINK-34138:
---
Description: As a merge method, the return value type is `void`, that is 
confusing.

> Improve the interface about MergeCallback in window
> ---
>
> Key: FLINK-34138
> URL: https://issues.apache.org/jira/browse/FLINK-34138
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: xuyang
>Priority: Major
>
> As a merge method, it is confusing that the return type is `void`.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34139) The slice assigner should not reveal its event time or process time at the interface level.

2024-01-17 Thread xuyang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuyang updated FLINK-34139:
---
Description: Currently, there is a function `boolean isEventTime()` to tell 
others whether the assigner works on event time or processing time. However, as an 
assigner, it should not expose this information.  (was: Currently, there is a function )

> The slice assigner should not reveal its event time or process time at the 
> interface level.
> ---
>
> Key: FLINK-34139
> URL: https://issues.apache.org/jira/browse/FLINK-34139
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: xuyang
>Priority: Major
>
> Currently, there is a function `boolean isEventTime()` to tell others whether 
> the assigner works on event time or processing time. However, as an assigner, 
> it should not expose this information.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34139) The slice assigner should not reveal its event time or process time at the interface level.

2024-01-17 Thread xuyang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuyang updated FLINK-34139:
---
Summary: The slice assigner should not reveal its event time or process 
time at the interface level.  (was: The window assigner should not reveal its 
event time or process time at the interface level.)

> The slice assigner should not reveal its event time or process time at the 
> interface level.
> ---
>
> Key: FLINK-34139
> URL: https://issues.apache.org/jira/browse/FLINK-34139
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: xuyang
>Priority: Major
>
> Currently, there is a function 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34139) The window assigner should not reveal its event time or process time at the interface level.

2024-01-17 Thread xuyang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuyang updated FLINK-34139:
---
Description: Currently, there is a function 

> The window assigner should not reveal its event time or process time at the 
> interface level.
> 
>
> Key: FLINK-34139
> URL: https://issues.apache.org/jira/browse/FLINK-34139
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: xuyang
>Priority: Major
>
> Currently, there is a function 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34130) Mark setBytes and getBytes of Configuration as @Internal

2024-01-17 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan updated FLINK-34130:

Fix Version/s: 2.0.0
   (was: 1.19.0)

>  Mark setBytes and getBytes of Configuration as @Internal
> -
>
> Key: FLINK-34130
> URL: https://issues.apache.org/jira/browse/FLINK-34130
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
> Fix For: 2.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34132][runtime] Correct the error message and doc of AdaptiveBatch only supports all edges being BLOCKING or HYBRID_FULL/HYBRID_SELECTIVE. [flink]

2024-01-17 Thread via GitHub


JunRuiLee commented on PR #24118:
URL: https://github.com/apache/flink/pull/24118#issuecomment-1897703554

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-34139) The window assigner should not reveal its event time or process time at the interface level.

2024-01-17 Thread xuyang (Jira)
xuyang created FLINK-34139:
--

 Summary: The window assigner should not reveal its event time or 
process time at the interface level.
 Key: FLINK-34139
 URL: https://issues.apache.org/jira/browse/FLINK-34139
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Runtime
Reporter: xuyang






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34132][runtime] Correct the error message and doc of AdaptiveBatch only supports all edges being BLOCKING or HYBRID_FULL/HYBRID_SELECTIVE. [flink]

2024-01-17 Thread via GitHub


JunRuiLee commented on code in PR #24118:
URL: https://github.com/apache/flink/pull/24118#discussion_r1456765220


##
docs/content/docs/deployment/elastic_scaling.md:
##
@@ -238,7 +238,7 @@ In addition, there are several related configuration 
options that may need adjus
 ### Limitations
 
 - **Batch jobs only**: Adaptive Batch Scheduler only supports batch jobs. 
Exception will be thrown if a streaming job is submitted.
-- **BLOCKING or HYBRID jobs only**: At the moment, Adaptive Batch Scheduler 
only supports jobs whose [shuffle mode]({{< ref "docs/deployment/config" 
>}}#execution-batch-shuffle-mode) is `ALL_EXCHANGES_BLOCKING / 
ALL_EXCHANGES_HYBRID_FULL / ALL_EXCHANGES_HYBRID_SELECTIVE`.
+- **BLOCKING or HYBRID jobs only**: At the moment, Adaptive Batch Scheduler 
only supports DataStream jobs whose [shuffle mode]({{< ref 
"docs/deployment/config" >}}#execution-batch-shuffle-mode) is 
`ALL_EXCHANGES_BLOCKING / ALL_EXCHANGES_HYBRID_FULL / 
ALL_EXCHANGES_HYBRID_SELECTIVE` and DataSet jobs whose `ExecutionMode` is 
`BATCH_FORCED`.

Review Comment:
   I would prefer to clarify here how to enforce ALL BLOCKING for DataSet jobs, 
considering that the usage of AdaptiveBatch with DataSet jobs has already been 
detailed in the official documentation for users.
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Update operators link in First Steps documentation [flink]

2024-01-17 Thread via GitHub


flinkbot commented on PR #24126:
URL: https://github.com/apache/flink/pull/24126#issuecomment-1897701307

   
   ## CI report:
   
   * 352762507edb48e1eb514c16d799a1b8afad4cf8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-34138) Improve the interface about MergeCallback in window

2024-01-17 Thread xuyang (Jira)
xuyang created FLINK-34138:
--

 Summary: Improve the interface about MergeCallback in window
 Key: FLINK-34138
 URL: https://issues.apache.org/jira/browse/FLINK-34138
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Runtime
Reporter: xuyang






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] Update operators link in First Steps documentation [flink]

2024-01-17 Thread via GitHub


ryanvickr opened a new pull request, #24126:
URL: https://github.com/apache/flink/pull/24126

   The link leads to a blank operators page, which is confusing; it should 
really lead to the operators overview page.
   
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33928][table-planner] Should not throw exception while creating view with specify field names [flink]

2024-01-17 Thread via GitHub


swuferhong commented on PR #24096:
URL: https://github.com/apache/flink/pull/24096#issuecomment-1897687775

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33928][table-planner] Should not throw exception while creating view with specify field names [flink]

2024-01-17 Thread via GitHub


swuferhong commented on PR #24096:
URL: https://github.com/apache/flink/pull/24096#issuecomment-1897681337

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-34072) Use JAVA_RUN in shell scripts

2024-01-17 Thread Yun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Tang resolved FLINK-34072.
--
Resolution: Fixed

merged in master: e7c8cd1562ebd45c1f7b48f519a11c6cd4fdf100

> Use JAVA_RUN in shell scripts
> -
>
> Key: FLINK-34072
> URL: https://issues.apache.org/jira/browse/FLINK-34072
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Scripts
>Reporter: Yun Tang
>Assignee: Yu Chen
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> We should call {{JAVA_RUN}} in all cases when we launch the {{java}} command, 
> otherwise we might not be able to run {{java}} if JAVA_HOME is not set.
> such as:
> {code:java}
> flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT/bin/config.sh: line 339: > 17 : 
> syntax error: operand expected (error token is "> 17 ")
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34072][scripts] Replace java command to JAVA_RUN in config.sh [flink]

2024-01-17 Thread via GitHub


Myasuka merged PR #24085:
URL: https://github.com/apache/flink/pull/24085


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-31511][doc-zh] Update the document sql_functions_zh.yml and fix some parts in sql_functions.yml [flink]

2024-01-17 Thread via GitHub


ruanhang1993 commented on PR #5:
URL: https://github.com/apache/flink/pull/5#issuecomment-1897676303

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-31449][resourcemanager] Remove DeclarativeSlotManager related logic [flink]

2024-01-17 Thread via GitHub


huwh commented on PR #24102:
URL: https://github.com/apache/flink/pull/24102#issuecomment-1897673795

   @KarmaGYZ @reswqa Can you help review it? Thanks a lot.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-34119) Improve description about changelog in document

2024-01-17 Thread Hangxiang Yu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hangxiang Yu resolved FLINK-34119.
--
Fix Version/s: 1.19.0
   Resolution: Fixed

merged 2ec8f81 into master

> Improve description about changelog in document
> ---
>
> Key: FLINK-34119
> URL: https://issues.apache.org/jira/browse/FLINK-34119
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Hangxiang Yu
>Assignee: Hangxiang Yu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Since we have resolved some issues and marked it as production-ready in the [release 
> note|https://flink.apache.org/2022/10/28/announcing-the-release-of-apache-flink-1.16/#generalized-incremental-checkpoint],
> we could update the description about it in the doc.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34119][doc] Improve description about changelog in document [flink]

2024-01-17 Thread via GitHub


masteryhx merged PR #24111:
URL: https://github.com/apache/flink/pull/24111


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-21949][table] Support ARRAY_AGG aggregate function [flink]

2024-01-17 Thread via GitHub


Jiabao-Sun commented on PR #23411:
URL: https://github.com/apache/flink/pull/23411#issuecomment-1897661962

   > Thanks for addressing comments in general it looks ok from my side i guess 
there is one little thing: since it is based on Calcite parser it allows to 
have `ORDER BY` inside... At the same time it is currently not supported on 
Flink level, not sure whether we can redefine this behavior however at least it 
would make sense to mention it in doc that it is not supported
   
   Yes, ORDER BY allows sorting by any field of the input rows, but currently 
it is difficult to obtain the complete input rows for sorting in the function 
implementation. Therefore, the `ORDER BY` clause is not supported yet.
   I have added an explanation in the documentation.
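   To illustrate why ORDER BY is hard here, a toy Java sketch (illustrative only, not Flink's implementation; all names are made up): an ARRAY_AGG-style accumulator only retains the aggregated column's values, so an ORDER BY on any *other* input column has no data left to sort by.

   ```java
   import java.util.ArrayList;
   import java.util.List;

   public class ArrayAggSketch {
       // Toy accumulator mimicking ARRAY_AGG: it keeps only the aggregated
       // column, dropping the rest of each input row.
       static final class Acc {
           final List<String> values = new ArrayList<>();

           void accumulate(String value) {
               values.add(value);
           }

           List<String> getValue() {
               return values;
           }
       }

       public static void main(String[] args) {
           Acc acc = new Acc();
           acc.accumulate("a");
           acc.accumulate("b");
           // Only the aggregated values survive; other row fields are gone.
           System.out.println(acc.getValue()); // prints [a, b]
       }
   }
   ```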
   
   @snuyanzin, please help take a look again when you have time.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34119][doc] Improve description about changelog in document [flink]

2024-01-17 Thread via GitHub


masteryhx commented on PR #24111:
URL: https://github.com/apache/flink/pull/24111#issuecomment-1897661018

   > @masteryhx Thanks for the contribution, LGTM. CI seems broken by 
https://issues.apache.org/jira/projects/FLINK/issues/FLINK-34077
   
   Thanks for the review.
   The failed Python cases have broken many PRs and are not related to this 
change, so they don't block this PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34080][configuration] Simplify the Configuration [flink]

2024-01-17 Thread via GitHub


1996fanrui commented on PR #24088:
URL: https://github.com/apache/flink/pull/24088#issuecomment-1897658106

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-30656) Provide more logs for schema compatibility check

2024-01-17 Thread Hangxiang Yu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807968#comment-17807968
 ] 

Hangxiang Yu commented on FLINK-30656:
--

We should support retaining some messages for TypeSerializerSchemaCompatibility, 
just like SchemaCompatibility in Avro.

Then every TypeSerializer could define its own message about compatibility.

I have two proposals:

1. Add new methods called TypeSerializerSchemaCompatibility#incompatible and 
#compatibleAfterMigration that accept a message, e.g. 
TypeSerializerSchemaCompatibility#incompatible(String message), and deprecate the 
related old methods.
{code:java}
public static <T> TypeSerializerSchemaCompatibility<T> incompatible(String message) {
    return new TypeSerializerSchemaCompatibility<>(Type.INCOMPATIBLE, message, null);
} {code}
2. Add a new method called TypeSerializerSchemaCompatibility#withMessage:

{code:java}
private TypeSerializerSchemaCompatibility<T> withMessage(String message) {
    this.message = message;
    return this;
} {code}
Proposal 1 behaves just like SchemaCompatibility in Avro, which forces the caller to 
add a message. But since TypeSerializerSchemaCompatibility is a PublicEvolving 
API, maybe we need a FLIP first?
Proposal 2 just adds a new method, so there is no breaking change, but every 
caller (including some custom-defined TypeSerializers) has to call it manually, 
because a missing call will not fail at compile time.
[~leonard] WDYT?
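A minimal, self-contained sketch of the two proposals (plain Java, not actual Flink classes; `Compat` is a made-up stand-in for `TypeSerializerSchemaCompatibility`, and all names are illustrative):

{code:java}
// Stand-in for TypeSerializerSchemaCompatibility; names are illustrative only.
final class Compat<T> {
    enum Kind { COMPATIBLE_AS_IS, INCOMPATIBLE }

    private final Kind kind;
    private String message;

    private Compat(Kind kind, String message) {
        this.kind = kind;
        this.message = message;
    }

    // Proposal 1: a factory overload that forces the caller to supply a message.
    static <T> Compat<T> incompatible(String message) {
        return new Compat<>(Kind.INCOMPATIBLE, message);
    }

    // Proposal 2: keep the old no-arg factory and add an optional fluent setter.
    static <T> Compat<T> incompatible() {
        return new Compat<>(Kind.INCOMPATIBLE, null);
    }

    Compat<T> withMessage(String message) {
        this.message = message;
        return this;
    }

    String message() {
        return message == null ? "" : message;
    }
}

public class CompatDemo {
    public static void main(String[] args) {
        Compat<String> p1 = Compat.incompatible("map value serializer changed");
        Compat<String> p2 =
                Compat.<String>incompatible().withMessage("map value serializer changed");
        System.out.println(p1.message());
        System.out.println(p2.message());
    }
}
{code}
The trade-off shows up directly: with proposal 1 the compiler rejects a call without a message, while with proposal 2 forgetting `withMessage` silently yields an empty message.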
 

> Provide more logs for schema compatibility check
> 
>
> Key: FLINK-30656
> URL: https://issues.apache.org/jira/browse/FLINK-30656
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Type Serialization System
>Reporter: Hangxiang Yu
>Assignee: Hangxiang Yu
>Priority: Major
>
> Currently, we have very few logs and exception info when checking schema 
> compatibility.
> It's difficult to see why the serializers are incompatible, especially for 
> some complicated nested serializers.
> For example, for map serializer, when it's not compatible, we may only see 
> below without other information:
> {code:java}
> Caused by: org.apache.flink.util.StateMigrationException: The new state 
> serializer 
> (org.apache.flink.api.common.typeutils.base.MapSerializer@e95e076a) must not 
> be incompatible with the old state serializer 
> (org.apache.flink.api.common.typeutils.base.MapSerializer@c33b100f). {code}
> So I think we could add more infos when checking the compatibility.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34133) Merge MiniBatchInterval when propagate traits to child block

2024-01-17 Thread zhouli (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807965#comment-17807965
 ] 

zhouli commented on FLINK-34133:


Hi [~xuyangzhong] , I have put the code and plan in the attachments area.

> Merge MiniBatchInterval when propagate traits to child block
> 
>
> Key: FLINK-34133
> URL: https://issues.apache.org/jira/browse/FLINK-34133
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: zhouli
>Priority: Major
>  Labels: pull-request-available
> Attachments: StreamWindowSQLExample.java, 
> image-2024-01-17-20-31-30-975.png, image-2024-01-17-20-34-30-039.png, plan.txt
>
>
>  
> we should merge the MiniBatchInterval when propagating traits to the child block, 
> otherwise we may get a wrong MiniBatchInterval in MinibatchAssigner. For 
> example:
> !image-2024-01-17-20-31-30-975.png!
> !image-2024-01-17-20-34-30-039.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34133) Merge MiniBatchInterval when propagate traits to child block

2024-01-17 Thread zhouli (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouli updated FLINK-34133:
---
Attachment: plan.txt

> Merge MiniBatchInterval when propagate traits to child block
> 
>
> Key: FLINK-34133
> URL: https://issues.apache.org/jira/browse/FLINK-34133
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: zhouli
>Priority: Major
>  Labels: pull-request-available
> Attachments: StreamWindowSQLExample.java, 
> image-2024-01-17-20-31-30-975.png, image-2024-01-17-20-34-30-039.png, plan.txt
>
>
>  
> we should merge the MiniBatchInterval when propagating traits to the child block, 
> otherwise we may get a wrong MiniBatchInterval in MinibatchAssigner. For 
> example:
> !image-2024-01-17-20-31-30-975.png!
> !image-2024-01-17-20-34-30-039.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34133) Merge MiniBatchInterval when propagate traits to child block

2024-01-17 Thread zhouli (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouli updated FLINK-34133:
---
Attachment: StreamWindowSQLExample.java

> Merge MiniBatchInterval when propagate traits to child block
> 
>
> Key: FLINK-34133
> URL: https://issues.apache.org/jira/browse/FLINK-34133
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: zhouli
>Priority: Major
>  Labels: pull-request-available
> Attachments: StreamWindowSQLExample.java, 
> image-2024-01-17-20-31-30-975.png, image-2024-01-17-20-34-30-039.png
>
>
>  
> we should merge the MiniBatchInterval when propagating traits to the child block, 
> otherwise we may get a wrong MiniBatchInterval in MinibatchAssigner. For 
> example:
> !image-2024-01-17-20-31-30-975.png!
> !image-2024-01-17-20-34-30-039.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34133) Merge MiniBatchInterval when propagate traits to child block

2024-01-17 Thread zhouli (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouli updated FLINK-34133:
---
Attachment: (was: MinibatchIntervalTest.java)

> Merge MiniBatchInterval when propagate traits to child block
> 
>
> Key: FLINK-34133
> URL: https://issues.apache.org/jira/browse/FLINK-34133
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: zhouli
>Priority: Major
>  Labels: pull-request-available
> Attachments: StreamWindowSQLExample.java, 
> image-2024-01-17-20-31-30-975.png, image-2024-01-17-20-34-30-039.png
>
>
>  
> we should merge the MiniBatchInterval when propagating traits to the child block, 
> otherwise we may get a wrong MiniBatchInterval in MinibatchAssigner. For 
> example:
> !image-2024-01-17-20-31-30-975.png!
> !image-2024-01-17-20-34-30-039.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34132][runtime] Correct the error message and doc of AdaptiveBatch only supports all edges being BLOCKING or HYBRID_FULL/HYBRID_SELECTIVE. [flink]

2024-01-17 Thread via GitHub


wanglijie95 commented on code in PR #24118:
URL: https://github.com/apache/flink/pull/24118#discussion_r1456018455


##
docs/content/docs/deployment/elastic_scaling.md:
##
@@ -238,7 +238,7 @@ In addition, there are several related configuration 
options that may need adjus
 ### Limitations
 
 - **Batch jobs only**: Adaptive Batch Scheduler only supports batch jobs. 
Exception will be thrown if a streaming job is submitted.
-- **BLOCKING or HYBRID jobs only**: At the moment, Adaptive Batch Scheduler 
only supports jobs whose [shuffle mode]({{< ref "docs/deployment/config" 
>}}#execution-batch-shuffle-mode) is `ALL_EXCHANGES_BLOCKING / 
ALL_EXCHANGES_HYBRID_FULL / ALL_EXCHANGES_HYBRID_SELECTIVE`.
+- **BLOCKING or HYBRID jobs only**: At the moment, Adaptive Batch Scheduler 
only supports DataStream jobs whose [shuffle mode]({{< ref 
"docs/deployment/config" >}}#execution-batch-shuffle-mode) is 
`ALL_EXCHANGES_BLOCKING / ALL_EXCHANGES_HYBRID_FULL / 
ALL_EXCHANGES_HYBRID_SELECTIVE` and DataSet jobs whose `ExecutionMode` is 
`BATCH_FORCED`.

Review Comment:
   As we discussed offline, we should state that adaptive batch scheduler does 
not support `DataSet` jobs here?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32514] Support configuring checkpointing interval during process backlog [flink]

2024-01-17 Thread via GitHub


lindong28 commented on code in PR #22931:
URL: https://github.com/apache/flink/pull/22931#discussion_r1456726081


##
flink-runtime/src/main/java/org/apache/flink/runtime/operators/coordination/OperatorCoordinatorHolder.java:
##
@@ -143,12 +144,14 @@ private OperatorCoordinatorHolder(
 
 public void lazyInitialize(
 GlobalFailureHandler globalFailureHandler,
-ComponentMainThreadExecutor mainThreadExecutor) {
+ComponentMainThreadExecutor mainThreadExecutor,
+@Nullable CheckpointCoordinator checkpointCoordinator) {
 
 this.globalFailureHandler = globalFailureHandler;
 this.mainThreadExecutor = mainThreadExecutor;
+context.lazyInitialize(globalFailureHandler, mainThreadExecutor, 
checkpointCoordinator);
 
-context.lazyInitialize(globalFailureHandler, mainThreadExecutor);
+context.lazyInitialize(globalFailureHandler, mainThreadExecutor, 
checkpointCoordinator);

Review Comment:
   @XComp Thanks for catching this. No, we didn't do it intentionally. I 
believe it was introduced when Yunfeng rebased the PR. I didn't catch this issue 
because I didn't go over the entire PR end-to-end very carefully, since there were 
only minor remaining comments in the last 2 rounds of review.
   
   It seems that the extra invocation of `lazyInitialize()` would not introduce 
any visible performance or correctness issue. Maybe one of us can fix it in our 
next PR.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32514] Support configuring checkpointing interval during process backlog [flink]

2024-01-17 Thread via GitHub


lindong28 commented on code in PR #22931:
URL: https://github.com/apache/flink/pull/22931#discussion_r1456726081


##
flink-runtime/src/main/java/org/apache/flink/runtime/operators/coordination/OperatorCoordinatorHolder.java:
##
@@ -143,12 +144,14 @@ private OperatorCoordinatorHolder(
 
 public void lazyInitialize(
 GlobalFailureHandler globalFailureHandler,
-ComponentMainThreadExecutor mainThreadExecutor) {
+ComponentMainThreadExecutor mainThreadExecutor,
+@Nullable CheckpointCoordinator checkpointCoordinator) {
 
 this.globalFailureHandler = globalFailureHandler;
 this.mainThreadExecutor = mainThreadExecutor;
+context.lazyInitialize(globalFailureHandler, mainThreadExecutor, 
checkpointCoordinator);
 
-context.lazyInitialize(globalFailureHandler, mainThreadExecutor);
+context.lazyInitialize(globalFailureHandler, mainThreadExecutor, 
checkpointCoordinator);

Review Comment:
   @XComp Thanks for catching this. No, we didn't do it intentionally. I 
believe it was introduced when Yunfeng rebased the PR. I didn't catch this issue 
because I didn't go over the entire PR end-to-end very carefully when there were 
only minor remaining comments in the last 2 rounds of review.
   
   It seems that the extra invocation of `lazyInitialize()` would not introduce 
any visible performance or correctness issue. Maybe one of us can fix it in our 
next PR.






[jira] [Comment Edited] (FLINK-34133) Merge MiniBatchInterval when propagate traits to child block

2024-01-17 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807963#comment-17807963
 ] 

xuyang edited comment on FLINK-34133 at 1/18/24 1:38 AM:
-

Hi, [~Leo Zhou] can you attach the code and plan as text in the `code` block? 
I'll try to reproduce this bug in my local environment.


was (Author: xuyangzhong):
Hi, can you attach the code and plan as text in the `code` block? I'll try to 
reproduce this bug in my local environment.

> Merge MiniBatchInterval when propagate traits to child block
> 
>
> Key: FLINK-34133
> URL: https://issues.apache.org/jira/browse/FLINK-34133
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: zhouli
>Priority: Major
>  Labels: pull-request-available
> Attachments: MinibatchIntervalTest.java, 
> image-2024-01-17-20-31-30-975.png, image-2024-01-17-20-34-30-039.png
>
>
>  
> we should merge MiniBatchInterval when propagate traits to child block, 
> otherwise we may get wrong MiniBatchInterval in MinibatchAssigner. For 
> example:
> !image-2024-01-17-20-31-30-975.png!
> !image-2024-01-17-20-34-30-039.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34133) Merge MiniBatchInterval when propagate traits to child block

2024-01-17 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807963#comment-17807963
 ] 

xuyang commented on FLINK-34133:


Hi, can you attach the code and plan as text in the `code` block? I'll try to 
reproduce this bug in my local environment.

> Merge MiniBatchInterval when propagate traits to child block
> 
>
> Key: FLINK-34133
> URL: https://issues.apache.org/jira/browse/FLINK-34133
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: zhouli
>Priority: Major
>  Labels: pull-request-available
> Attachments: MinibatchIntervalTest.java, 
> image-2024-01-17-20-31-30-975.png, image-2024-01-17-20-34-30-039.png
>
>
>  
> we should merge MiniBatchInterval when propagate traits to child block, 
> otherwise we may get wrong MiniBatchInterval in MinibatchAssigner. For 
> example:
> !image-2024-01-17-20-31-30-975.png!
> !image-2024-01-17-20-34-30-039.png!





Re: [PR] [FLINK-34132][runtime] Correct the error message and doc of AdaptiveBatch only supports all edges being BLOCKING or HYBRID_FULL/HYBRID_SELECTIVE. [flink]

2024-01-17 Thread via GitHub


JunRuiLee commented on PR #24118:
URL: https://github.com/apache/flink/pull/24118#issuecomment-1897604825

   @flinkbot run azure





Re: [PR] [FLINK-34126][configuration] Correct the description of jobmanager.scheduler [flink]

2024-01-17 Thread via GitHub


JunRuiLee commented on PR #24112:
URL: https://github.com/apache/flink/pull/24112#issuecomment-1897604085

   @flinkbot run azure





Re: [PR] [FLINK-34083][config] Deprecate string configuration keys and unused constants in ConfigConstants [flink]

2024-01-17 Thread via GitHub


Sxnan commented on PR #24089:
URL: https://github.com/apache/flink/pull/24089#issuecomment-1897598262

   @flinkbot run azure





Re: [PR] [FLINK-21949][table] Support ARRAY_AGG aggregate function [flink]

2024-01-17 Thread via GitHub


snuyanzin commented on PR #23411:
URL: https://github.com/apache/flink/pull/23411#issuecomment-1897393850

   Thanks for addressing comments 
   in general it looks ok from my side
   i guess there is one little thing: since it is based on the Calcite parser it 
allows having `ORDER BY` inside...
   At the same time it is currently not supported on the Flink level; not sure 
whether we can redefine this behavior, but at least it would make sense to 
mention in the docs that it is not supported
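For context, `ARRAY_AGG` collects the values of each group into an array, e.g. `SELECT dept, ARRAY_AGG(name) FROM emp GROUP BY dept` (table and column names made up here). A minimal plain-Java sketch of those per-group semantics (my own illustration, not the planner's implementation):

```java
// Toy illustration of ARRAY_AGG semantics: collect column 1 into a list
// per distinct value of column 0, preserving input order within a group.
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ArrayAggDemo {
    /** Groups rows by their first column and aggregates the second column into a list. */
    public static Map<String, List<String>> arrayAgg(List<String[]> rows) {
        Map<String, List<String>> out = new LinkedHashMap<>();
        for (String[] row : rows) {
            out.computeIfAbsent(row[0], k -> new ArrayList<>()).add(row[1]);
        }
        return out;
    }

    public static void main(String[] args) {
        List<String[]> rows = List.of(
                new String[] {"dev", "alice"},
                new String[] {"dev", "bob"},
                new String[] {"ops", "carol"});
        System.out.println(arrayAgg(rows));
    }
}
```

The `ARRAY_AGG(name ORDER BY name)` variant that the Calcite parser accepts would additionally sort each group's list; that is the behavior the comment above flags as parseable but not supported on the Flink side.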
   





Re: [PR] [hotfix] Synchronize CI pipeline setup [flink-connector-kafka]

2024-01-17 Thread via GitHub


snuyanzin commented on PR #78:
URL: 
https://github.com/apache/flink-connector-kafka/pull/78#issuecomment-1896939681

   yeah... initially there were already 2 jdks (8, 11) in ci for PRs and that's 
why I've just added other jdks there...
   Now the amount of ci jobs per PR is growing together with the number of jdks, 
and possibly twice because of python...
   
   it would make sense to follow something similar to what we have in the Flink 
main repo, where everything is built with jdk8 for PRs and all other jdks 
participate only during nightlies 





[jira] [Commented] (FLINK-32416) Initial DynamicKafkaSource Implementation

2024-01-17 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807888#comment-17807888
 ] 

Martijn Visser commented on FLINK-32416:


Fixed in apache/flink-connector-kafka:main

initial implementation of DynamicKafkaSource with bounded/unbounded support and 
unit/integration tests eaeb7817788a2da6fed3d9433850e10499e91852

Fix flaky tests by ensuring test utilities produce records with consistency and 
cleanup notify no more splits to ensure it is sent 
cdfa328b5ec34d711ae2c9e93de6de7565fd1db6


> Initial DynamicKafkaSource Implementation 
> --
>
> Key: FLINK-32416
> URL: https://issues.apache.org/jira/browse/FLINK-32416
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kafka
>Affects Versions: kafka-3.1.0
>Reporter: Mason Chen
>Assignee: Mason Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: kafka-3.1.0
>
>
> Implementation that supports unbounded and bounded modes. With a default 
> implementation of KafkaMetadataService



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-32416) Initial DynamicKafkaSource Implementation

2024-01-17 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-32416.
--
Resolution: Fixed

> Initial DynamicKafkaSource Implementation 
> --
>
> Key: FLINK-32416
> URL: https://issues.apache.org/jira/browse/FLINK-32416
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kafka
>Affects Versions: kafka-3.1.0
>Reporter: Mason Chen
>Assignee: Mason Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: kafka-3.1.0
>
>
> Implementation that supports unbounded and bounded modes. With a default 
> implementation of KafkaMetadataService



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-32416] Fix flaky tests by ensuring test utilities produce records w… [flink-connector-kafka]

2024-01-17 Thread via GitHub


MartijnVisser merged PR #79:
URL: https://github.com/apache/flink-connector-kafka/pull/79





Re: [PR] [FLINK-32416] Fix flaky tests by ensuring test utilities produce records w… [flink-connector-kafka]

2024-01-17 Thread via GitHub


MartijnVisser commented on PR #79:
URL: 
https://github.com/apache/flink-connector-kafka/pull/79#issuecomment-1896516685

   > which thing to fix are you referring to?
   
   I misread 
https://github.com/apache/flink-connector-kafka/pull/79#discussion_r1454686838 
as meaning that you still wanted to fix something  
   
   I'll merge it after this CI run, thanks :)





Re: [PR] [FLINK-32622][table-planner] Optimize mini-batch assignment [flink]

2024-01-17 Thread via GitHub


jeyhunkarimov commented on PR #23470:
URL: https://github.com/apache/flink/pull/23470#issuecomment-1896341427

   @flinkbot run azure





Re: [PR] [FLINK-33694][gs-fs-hadoop] Support overriding GCS root URL (backport to 1.18) [flink]

2024-01-17 Thread via GitHub


flinkbot commented on PR #24125:
URL: https://github.com/apache/flink/pull/24125#issuecomment-1896298802

   
   ## CI report:
   
   * 4d83dffb1e1ba29ef45c92c130c25a79491cef71 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] [FLINK-33694][gs-fs-hadoop] Support overriding GCS root URL (backport to 1.17) [flink]

2024-01-17 Thread via GitHub


flinkbot commented on PR #24124:
URL: https://github.com/apache/flink/pull/24124#issuecomment-1896297492

   
   ## CI report:
   
   * 7b779ec4eae3df491b3f3a2b72bca99c996e3473 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] [FLINK-33365] include filters with Lookup joins [flink-connector-jdbc]

2024-01-17 Thread via GitHub


davidradl commented on PR #79:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/79#issuecomment-1896292338

   It looks like we have used the approach from 
https://stackoverflow.com/questions/2309970/named-parameters-in-jdbc. It says 
`Please note that the above simple example does not handle using named 
parameter twice. Nor does it handle using the : sign inside quotes.`
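The caveat is easy to see in a small sketch of the `:name`-rewriting idea referenced above (my own toy code adapted from the StackOverflow approach, not the flink-connector-jdbc implementation; `NamedParams` and `rewrite` are invented names):

```java
// Toy named-parameter rewriter: each ":name" token is replaced with a JDBC
// '?' placeholder and its name recorded in positional order. As the quoted
// caveat notes, this naive regex does not special-case a ':' that appears
// inside a quoted string literal.
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NamedParams {
    private static final Pattern PARAM = Pattern.compile(":(\\w+)");

    /** Returns JDBC-style SQL; appends each parameter name to 'order' as it is encountered. */
    public static String rewrite(String sql, List<String> order) {
        Matcher m = PARAM.matcher(sql);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            order.add(m.group(1));
            m.appendReplacement(out, "?");
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        List<String> order = new java.util.ArrayList<>();
        String jdbc = rewrite("SELECT * FROM t WHERE a = :a AND b = :b", order);
        System.out.println(jdbc + " " + order);
    }
}
```

A literal such as `WHERE msg = 'a:b'` would be mangled by this regex, which is the kind of case a lookup-join filter pushdown would need to handle (e.g. by tokenizing the SQL instead of pattern-matching it).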





[PR] [FLINK-33694][gs-fs-hadoop] Support overriding GCS root URL (backport to 1.18) [flink]

2024-01-17 Thread via GitHub


patricklucas opened a new pull request, #24125:
URL: https://github.com/apache/flink/pull/24125

   This is a backport of #23836 to 1.18.





[PR] [FLINK-33694][gs-fs-hadoop] Support overriding GCS root URL (backport to 1.17) [flink]

2024-01-17 Thread via GitHub


patricklucas opened a new pull request, #24124:
URL: https://github.com/apache/flink/pull/24124

   This is a backport of #23836 to 1.17.





Re: [PR] [FLINK-32622][table-planner] Optimize mini-batch assignment [flink]

2024-01-17 Thread via GitHub


jeyhunkarimov commented on PR #23470:
URL: https://github.com/apache/flink/pull/23470#issuecomment-1896275667

   @flinkbot run azure





Re: [PR] [FLINK-33694][gs-fs-hadoop] Support overriding GCS root URL [flink]

2024-01-17 Thread via GitHub


patricklucas commented on PR #23836:
URL: https://github.com/apache/flink/pull/23836#issuecomment-1896272486

   @XComp sure, coming up shortly.
   
   Thanks for the review, @afedulov.





Re: [PR] [FLINK-33694][gs-fs-hadoop] Support overriding GCS root URL [flink]

2024-01-17 Thread via GitHub


XComp commented on PR #23836:
URL: https://github.com/apache/flink/pull/23836#issuecomment-1896250037

   Sure. Could you provide backport PRs for 1.18 and 1.17? Then I can merge all 
the necessary PRs at once.





Re: [PR] [FLINK-33844][doc] Update japicmp configuration for 1.18.1 [flink]

2024-01-17 Thread via GitHub


flinkbot commented on PR #24123:
URL: https://github.com/apache/flink/pull/24123#issuecomment-1896231867

   
   ## CI report:
   
   * 8e6b149eae970cae1d2cb8c497943694124907d7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] [FLINK-33803] Set observedGeneration at end of reconciliation [flink-kubernetes-operator]

2024-01-17 Thread via GitHub


justin-chen commented on code in PR #755:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/755#discussion_r1456093406


##
flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/status/FlinkDeploymentStatus.java:
##
@@ -55,4 +55,7 @@ public class FlinkDeploymentStatus extends 
CommonStatus {
 
 /** Information about the TaskManagers for the scale subresource. */
 private TaskManagerInfo taskManager;
+
+/** Last observed generation of the FlinkDeployment. */
+private Long observedGeneration;

Review Comment:
   > we should set the new field next to where the old one is set so that the 
old logic is easy to remove later
   
   @gyfora Not sure I understand, is the intention to duplicate the 
`observedGeneration` such that it's available via both 
`status.observedGeneration` (we need it at this top level) and 
`status.reconciliationStatus.lastReconciledSpec.resource_metadata`?
   
   Given the status currently looks like this (with my current PR):
   ```
   status:
     ...
     observedGeneration: 123
     reconciliationStatus:
       lastReconciledSpec: '{
         "spec": {
           ...
         },
         "resource_metadata": {
           "apiVersion": "flink.apache.org/v1beta1",
           "metadata": {   // io.fabric8.kubernetes.api.model.ObjectMeta
             "generation": 123
           },
           "firstDeployment": false
         }
       }'
       lastStableSpec: '{...}'
       reconciliationTimestamp:
       state: DEPLOYED
   ```






[jira] [Updated] (FLINK-33776) Allow to specify optional profile for connectors

2024-01-17 Thread Etienne Chauchot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Etienne Chauchot updated FLINK-33776:
-
Component/s: Build System / CI

> Allow to specify optional profile for connectors
> 
>
> Key: FLINK-33776
> URL: https://issues.apache.org/jira/browse/FLINK-33776
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System / CI, Connectors / Parent
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: connector-parent-1.1.0
>
>
> The issue is that sometimes the connector should be tested against several 
> versions of sinks/sources
> e.g. hive connector should be tested against hive 2 and hive3, opensearch 
> should be tested against 1 and 2
> one of the ways is to use profiles for that





[jira] [Updated] (FLINK-33776) Allow to specify optional profile for connectors

2024-01-17 Thread Etienne Chauchot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Etienne Chauchot updated FLINK-33776:
-
Issue Type: Improvement  (was: Bug)

> Allow to specify optional profile for connectors
> 
>
> Key: FLINK-33776
> URL: https://issues.apache.org/jira/browse/FLINK-33776
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Parent
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: connector-parent-1.1.0
>
>
> The issue is that sometimes the connector should be tested against several 
> versions of sinks/sources
> e.g. hive connector should be tested against hive 2 and hive3, opensearch 
> should be tested against 1 and 2
> one of the ways is to use profiles for that





[jira] [Updated] (FLINK-33776) Allow to specify optional profile for connectors

2024-01-17 Thread Etienne Chauchot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Etienne Chauchot updated FLINK-33776:
-
Fix Version/s: (was: connector-parent-1.1.0)

> Allow to specify optional profile for connectors
> 
>
> Key: FLINK-33776
> URL: https://issues.apache.org/jira/browse/FLINK-33776
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System / CI, Connectors / Parent
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>
> The issue is that sometimes the connector should be tested against several 
> versions of sinks/sources
> e.g. hive connector should be tested against hive 2 and hive3, opensearch 
> should be tested against 1 and 2
> one of the ways is to use profiles for that





[jira] [Comment Edited] (FLINK-33776) Allow to specify optional profile for connectors

2024-01-17 Thread Etienne Chauchot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807803#comment-17807803
 ] 

Etienne Chauchot edited comment on FLINK-33776 at 1/17/24 5:01 PM:
---

[~Sergey Nuyanzin] I'm releasing flink-connector-parent and I'm reviewing the 
release notes. I think this ticket should be classified differently. 
I did:
- type = improvement instead of bug because this ticket is adding a new ability
- remove unrelated link to PR 23910
- this ticket touches only the CI and not the connector-parent pom. as it is 
related to connectors still I'd put Build/CI + connector/parent as components 
but remove the connector-parent fix version.

Feel free to change if you disagree.



was (Author: echauchot):
[~Sergey Nuyanzin] I'm releasing flink-connector-parent and I'm reviewing the 
release notes. I think this ticket should be classified differently. 
I did:
- type = improvement instead of bug because this ticket is adding a new ability
- remove unrelated link to PR 23910
- this ticket touches only the CI and not the connector-parent pom. as it is 
related to connectors still I'd put Build/CI + connector/parent as components 
but remove the connector-parent fix version.
Feel free to change if you disagree.


> Allow to specify optional profile for connectors
> 
>
> Key: FLINK-33776
> URL: https://issues.apache.org/jira/browse/FLINK-33776
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Parent
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: connector-parent-1.1.0
>
>
> The issue is that sometimes the connector should be tested against several 
> versions of sinks/sources
> e.g. hive connector should be tested against hive 2 and hive3, opensearch 
> should be tested against 1 and 2
> one of the ways is to use profiles for that





[jira] [Comment Edited] (FLINK-33776) Allow to specify optional profile for connectors

2024-01-17 Thread Etienne Chauchot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807803#comment-17807803
 ] 

Etienne Chauchot edited comment on FLINK-33776 at 1/17/24 5:01 PM:
---

[~Sergey Nuyanzin] I'm releasing flink-connector-parent and I'm reviewing the 
release notes. I think this ticket should be classified differently. 
I did:
- type = improvement instead of bug because this ticket is adding a new ability
- remove unrelated link to PR 23910
- this ticket touches only the CI and not the connector-parent pom. as it is 
related to connectors still I'd put Build/CI + connector/parent as components 
but remove the connector-parent fix version.
Feel free to change if you disagree.



was (Author: echauchot):
[~Sergey Nuyanzin] I'm releasing flink-connector-parent and I'm reviewing the 
release notes. I think this ticket should be classified differently. 
I'd do:
- type = improvement instead of bug 
- remove unrelated link to PR 23910
- this ticket touches only the CI and not the connector-parent pom. as it is 
related to connectors still I'd put Build/CI + connector/parent as components 
but remove the fix version.

WDYT ?


> Allow to specify optional profile for connectors
> 
>
> Key: FLINK-33776
> URL: https://issues.apache.org/jira/browse/FLINK-33776
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Parent
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: connector-parent-1.1.0
>
>
> The issue is that sometimes the connector should be tested against several 
> versions of sinks/sources
> e.g. hive connector should be tested against hive 2 and hive3, opensearch 
> should be tested against 1 and 2
> one of the ways is to use profiles for that





[jira] [Comment Edited] (FLINK-34135) A number of ci failures with Access to the path '.../_work/_temp/containerHandlerInvoker.js' is denied.

2024-01-17 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807809#comment-17807809
 ] 

Sergey Nuyanzin edited comment on FLINK-34135 at 1/17/24 4:56 PM:
--

most of them yes
there is a couple of reproductions with *AlibabaCI006*
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56486=logs=9cada3cb-c1d3-5621-16da-0f718fb86602]
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56486=logs=fc5181b0-e452-5c8f-68de-1097947f6483=6e6da9c7-2448-523d-ca43-b6f326469c3d=10

Also *AlibabaCI002* and *AlibabaCI003* are failing with 
{noformat}
[error]Could not find a part of the path 
'/home/agent01/myagent/_work/_temp/containerHandlerInvoker.js'
{noformat}
*AlibabaCI002*
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56482=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=36f187fe-972b-5dcd-fbe7-74e193d0fc1f=9

*AlibabaCI003*
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56482=logs=d89de3df-4600-5585-dadc-9bbc9a5e661c=1f6c33d4-eb81-529f-4844-d14d67a2c6f7=9
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56482=logs=b6f8a893-8f59-51d5-fe28-fb56a8b0932c=048b100e-e87d-5e10-7bc7-0dd1c9aa5dd0=9
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56482=logs=245e1f2e-ba5b-5570-d689-25ae21e5302f=bc4c9170-c121-5244-cb07-eb2bb41ef63d=9


was (Author: sergey nuyanzin):
most of them yes
there is a couple of reproductions with *AlibabaCI006*
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56486=logs=9cada3cb-c1d3-5621-16da-0f718fb86602]
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56486=logs=fc5181b0-e452-5c8f-68de-1097947f6483=6e6da9c7-2448-523d-ca43-b6f326469c3d=10

Also *AlibabaCI002* is failing with 
{noformat}
[error]Could not find a part of the path 
'/home/agent01/myagent/_work/_temp/containerHandlerInvoker.js'
{noformat}
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56482=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=36f187fe-972b-5dcd-fbe7-74e193d0fc1f=9

> A number of ci failures with Access to the path 
> '.../_work/_temp/containerHandlerInvoker.js' is denied.
> ---
>
> Key: FLINK-34135
> URL: https://issues.apache.org/jira/browse/FLINK-34135
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Reporter: Sergey Nuyanzin
>Assignee: Jeyhun Karimov
>Priority: Blocker
>  Labels: test-stability
>
> There is a number of builds failing with something like 
> {noformat}
> ##[error]Access to the path 
> '/home/agent03/myagent/_work/_temp/containerHandlerInvoker.js' is denied.
> {noformat}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56490=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=fb588352-ef18-568d-b447-699986250ccb
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=554d7c3f-d38e-55f4-96b4-ada3a9cb7d6f=9
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=fa307d6d-91b1-5ab6-d460-ef50f552b1fe=1798d435-832b-51fe-a9ad-efb9abf4ab04=9
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=a1ac4ce4-9a4f-5fdb-3290-7e163fba19dc=e4c57254-ec06-5788-3f8e-5ad5dffb418e=9
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=2c3cbe13-dee0-5837-cf47-3053da9a8a78=56881383-f398-5091-6b3b-22a7eeb7cfa8=9
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=b0a398c0-685b-599c-eb57-c8c2a771138e=2d9c27d0-8dbb-5be9-7271-453f74f48ab3=9
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=162f98f7-8967-5f47-2782-a1e178ec2ad3=c9934c56-710d-5f85-d2b8-28ec1fd700ed=9





Re: [PR] [hotfix] Fix flaky tests by ensuring test utilities produce records w… [flink-connector-kafka]

2024-01-17 Thread via GitHub


mas-chen commented on PR #79:
URL: 
https://github.com/apache/flink-connector-kafka/pull/79#issuecomment-1896211992

   @MartijnVisser which thing to fix are you referring to? I think this PR is 
ready as-is if it is blocking your other work. I'll need a few days to look 
into the ticket; I'll probably get to it at the end of the week. 




