Re: [PR] [FLINK-34401][docs-zh] Translate "Flame Graphs" page into Chinese [flink]

2024-02-09 Thread via GitHub


lxliyou001 commented on PR #24279:
URL: https://github.com/apache/flink/pull/24279#issuecomment-1936908761

   @flinkbot run azure





Re: [PR] [FLINK-34401][docs-zh] Translate "Flame Graphs" page into Chinese [flink]

2024-02-09 Thread via GitHub


lxliyou001 commented on PR #24279:
URL: https://github.com/apache/flink/pull/24279#issuecomment-1936858432

   @flinkbot run azure





Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-09 Thread via GitHub


hanyuzheng7 commented on PR #22951:
URL: https://github.com/apache/flink/pull/22951#issuecomment-1936835154

   @flinkbot run azure





Re: [PR] [FLINK-34094] Adds documentation for AsyncScalarFunction [flink]

2024-02-09 Thread via GitHub


AlanConfluent commented on PR #24224:
URL: https://github.com/apache/flink/pull/24224#issuecomment-1936807100

   @MartijnVisser I updated the documentation.  Please tell me if anything else 
is required to make the deadline.





Re: [PR] [FLINK-34094] Adds documentation for AsyncScalarFunction [flink]

2024-02-09 Thread via GitHub


AlanConfluent commented on code in PR #24224:
URL: https://github.com/apache/flink/pull/24224#discussion_r1484927246


##
docs/content/docs/dev/table/functions/udfs.md:
##
@@ -846,6 +847,119 @@ If you intend to implement or call functions in Python, 
please refer to the [Pyt
 
 {{< top >}}
 
+Asynchronous Scalar Functions
+
+
+A user-defined asynchronous scalar function maps zero, one, or multiple scalar values to a new scalar value, but does so asynchronously. Any data type listed in the [data types section]({{< ref "docs/dev/table/types" >}}) can be used as a parameter or return type of an evaluation method.
+
+In order to define an asynchronous scalar function, one has to extend the base class `AsyncScalarFunction` in `org.apache.flink.table.functions` and implement one or more evaluation methods named `eval(...)`. The first argument must be a `CompletableFuture<...>` which is used to return the result, with subsequent arguments being the parameters passed to the function.
+
+The following example shows how to do work on a thread pool in the background, though any library exposing an async interface may be used directly to complete the `CompletableFuture` from a callback. See the [Implementation Guide](#implementation-guide) for more details.

Review Comment:
   I took a bunch of the introduction from that section and adopted the diagram as well.
   
   I also added sections for each of the areas you mentioned and explained how they work, including citing the specific configs and how they are used.
   
   It should make sense and be easy for new users to understand, I think. If there are areas that require more detail, please tell me and I can add more; I just didn't want to make it too verbose compared to the other UDF descriptions.






Re: [PR] [FLINK-34401][docs-zh] Translate "Flame Graphs" page into Chinese [flink]

2024-02-09 Thread via GitHub


lxliyou001 commented on PR #24279:
URL: https://github.com/apache/flink/pull/24279#issuecomment-1936765976

   @flinkbot run azure





[jira] [Commented] (FLINK-34378) Minibatch join disrupted the original order of input records

2024-02-09 Thread Jeyhun Karimov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17816252#comment-17816252
 ] 

Jeyhun Karimov commented on FLINK-34378:


Hi [~xuyangzhong], the ordering is different even with parallelism 1 because of the {{Set}} used in the {{MiniBatch}} operator. IMO this is expected behavior. A standalone illustration of that point follows below.
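A toy Java demonstration (not Flink code) of why a hash-based {{Set}} buffer can reorder records between arrival and emission:

{code:java}
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;

public class SetOrderDemo {
    public static void main(String[] args) {
        Set<String> hashBuffer = new HashSet<>();
        Set<String> arrivalOrderBuffer = new LinkedHashSet<>();
        for (int i = 1; i <= 8; i++) {
            hashBuffer.add("row-" + i);
            arrivalOrderBuffer.add("row-" + i);
        }
        // A HashSet iterates in hash-bucket order, not arrival order:
        System.out.println(hashBuffer);         // e.g. [row-4, row-3, row-7, ...]
        // A LinkedHashSet drains in arrival order:
        System.out.println(arrivalOrderBuffer); // [row-1, row-2, ..., row-8]
    }
}
{code}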

> Minibatch join disrupted the original order of input records
> 
>
> Key: FLINK-34378
> URL: https://issues.apache.org/jira/browse/FLINK-34378
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Runtime
>Affects Versions: 1.19.0
>Reporter: xuyang
>Priority: Major
> Fix For: 1.19.0
>
>
> I'm not sure if it's a bug. The following case can reproduce this situation.
> {code:java}
> // add it in CalcITCase
> @Test
> def test(): Unit = {
>   env.setParallelism(1)
>   val rows = Seq(
>     row(1, "1"),
>     row(2, "2"),
>     row(3, "3"),
>     row(4, "4"),
>     row(5, "5"),
>     row(6, "6"),
>     row(7, "7"),
>     row(8, "8"))
>   val dataId = TestValuesTableFactory.registerData(rows)
>   val ddl =
>     s"""
>        |CREATE TABLE t1 (
>        |  a int,
>        |  b string
>        |) WITH (
>        |  'connector' = 'values',
>        |  'data-id' = '$dataId',
>        |  'bounded' = 'false'
>        |)
>      """.stripMargin
>   tEnv.executeSql(ddl)
>   val ddl2 =
>     s"""
>        |CREATE TABLE t2 (
>        |  a int,
>        |  b string
>        |) WITH (
>        |  'connector' = 'values',
>        |  'data-id' = '$dataId',
>        |  'bounded' = 'false'
>        |)
>      """.stripMargin
>   tEnv.executeSql(ddl2)
>   tEnv.getConfig.getConfiguration
>     .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ENABLED, Boolean.box(true))
>   tEnv.getConfig.getConfiguration
>     .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ALLOW_LATENCY, Duration.ofSeconds(5))
>   tEnv.getConfig.getConfiguration
>     .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_SIZE, Long.box(20L))
>   println(tEnv.sqlQuery("SELECT * from t1 join t2 on t1.a = t2.a").explain())
>   tEnv.executeSql("SELECT * from t1 join t2 on t1.a = t2.a").print()
> }{code}
> Result
> {code:java}
> ++---+---+---+---+ 
> | op | a | b | a0| b0| 
> ++---+---+---+---+ 
> | +I | 3 | 3 | 3 | 3 | 
> | +I | 7 | 7 | 7 | 7 | 
> | +I | 2 | 2 | 2 | 2 | 
> | +I | 5 | 5 | 5 | 5 | 
> | +I | 1 | 1 | 1 | 1 | 
> | +I | 6 | 6 | 6 | 6 | 
> | +I | 4 | 4 | 4 | 4 | 
> | +I | 8 | 8 | 8 | 8 | 
> ++---+---+---+---+
> {code}
> When I do not use minibatch join, the result is :
> {code:java}
> ++---+---+++
> | op | a | b | a0 | b0 |
> ++---+---+++
> | +I | 1 | 1 |  1 |  1 |
> | +I | 2 | 2 |  2 |  2 |
> | +I | 3 | 3 |  3 |  3 |
> | +I | 4 | 4 |  4 |  4 |
> | +I | 5 | 5 |  5 |  5 |
> | +I | 6 | 6 |  6 |  6 |
> | +I | 7 | 7 |  7 |  7 |
> | +I | 8 | 8 |  8 |  8 |
> ++---+---+++
>  {code}
>  





[jira] [Commented] (FLINK-34129) MiniBatchGlobalGroupAggFunction will make -D as +I then make +I as -U when state expired

2024-02-09 Thread Jeyhun Karimov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17816250#comment-17816250
 ] 

Jeyhun Karimov commented on FLINK-34129:


Hi [~loserwang1024], [~xuyangzhong], I am not sure whether this is a bug or expected behaviour in local-global aggregation.

Partitioned aggregates (see {{GroupAggFunction::processElement}}) solve the above-mentioned issue by tracking {{firstRow}} and avoiding sending the first row to the {{retract}} function. In that case, since the state is partitioned and there is only one operator instance responsible for the partition, we can avoid the behaviour described above.

In the presence of local-global aggregates, however:
- it is difficult to prevent this behaviour in {{LocalGroupAggFunction}} instances, since there can be multiple {{LocalGroupAggFunction}} instances and there is no ordering among them (to track {{firstRow}} and to avoid it being retracted)
- it is difficult to prevent this behaviour in {{GlobalGroupAggFunction}} instances, since they already receive pre-aggregated data.

Currently, the only ways to avoid this behaviour are to either

- use {{firstRow}} tracking (similar to {{GroupAggFunction::processElement}}) in {{LocalGroupAggFunction}} AND use parallelism 1, or
- use partitioned aggregates.

A toy illustration of the {{firstRow}} technique is sketched after this comment.
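A minimal, runnable sketch of the {{firstRow}} idea referenced above (toy Java, not Flink code; all names are illustrative): a single owner per key can suppress the retraction for a key it has never emitted a result for.

{code:java}
import java.util.HashMap;
import java.util.Map;

public class FirstRowDemo {
    // Last emitted aggregate per key; an absent key means "never emitted".
    private final Map<Integer, Long> resultByKey = new HashMap<>();

    public void process(int key, long newAgg) {
        boolean firstRow = !resultByKey.containsKey(key);
        if (!firstRow) {
            // retract the previous result only if one was actually emitted
            System.out.println("-U[" + key + ", " + resultByKey.get(key) + "]");
        }
        resultByKey.put(key, newAgg);
        System.out.println((firstRow ? "+I[" : "+U[") + key + ", " + newAgg + "]");
    }

    public static void main(String[] args) {
        FirstRowDemo demo = new FirstRowDemo();
        demo.process(1, 20); // first row for key 1: emits +I only, no retraction
        demo.process(1, 25); // emits -U[1, 20] then +U[1, 25]
    }
}
{code}

This works only because one instance owns the key; with multiple local aggregators, no single instance knows whether the row is globally the first, which is the difficulty described above.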

> MiniBatchGlobalGroupAggFunction will make -D as +I then make +I as -U when 
> state expired 
> -
>
> Key: FLINK-34129
> URL: https://issues.apache.org/jira/browse/FLINK-34129
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.18.1
>Reporter: Hongshun Wang
>Priority: Major
> Fix For: 1.19.0
>
>
> Take sum as an example:
> When state has expired and an update operation then arrives from the source, MiniBatchGlobalGroupAggFunction takes -U[1, 20] and +U[1, 20] as input but will emit +I[1, -20] and -D[1, -20]. The sink will then delete the data from the external database.
> Let's see why this happens:
>  * when state is expired and -U[1, 20] arrives, MiniBatchGlobalGroupAggFunction will create a new sum accumulator and set firstRow to true.
> {code:java}
> if (stateAcc == null) { 
>     stateAcc = globalAgg.createAccumulators(); 
>     firstRow = true; 
> }   {code}
>  * then the sum accumulator will retract the sum value to -20
>  * As this is the first row, MiniBatchGlobalGroupAggFunction will change the -U to +I and emit it downstream.
> {code:java}
> if (!recordCounter.recordCountIsZero(acc)) {
>     // if this was not the first row and we have to emit retractions
>     if (!firstRow) {
>         // ignore
>     } else {
>         // update acc to state
>         accState.update(acc);
>
>         // this is the first row, output new result
>         // prepare INSERT message for new row
>         resultRow.replace(currentKey, newAggValue).setRowKind(RowKind.INSERT);
>         out.collect(resultRow);
>     }
> } {code}
>  * when the next +U[1, 20] arrives, the sum accumulator will retract the sum value to 0, so RetractionRecordCounter#recordCountIsZero will return true. Because firstRow = false now, it will change the +U to -D and emit it downstream.
> {code:java}
> if (!recordCounter.recordCountIsZero(acc)) {
>     // ignore
> } else {
>     // we retracted the last record for this key
>     // if this is not the first row, send out a DELETE message
>     if (!firstRow) {
>         // prepare DELETE message for previous row
>         resultRow.replace(currentKey, prevAggValue).setRowKind(RowKind.DELETE);
>         out.collect(resultRow);
>     }
> } {code}
>  
> So the sink will receive +I and -D after a source update operation, and the data will be deleted.





[jira] [Updated] (FLINK-34407) Flaky tests causing workflow timeout

2024-02-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34407:
---
Labels: pull-request-available  (was: )

> Flaky tests causing workflow timeout
> 
>
> Key: FLINK-34407
> URL: https://issues.apache.org/jira/browse/FLINK-34407
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: aws-connector-4.2.0
>Reporter: Aleksandr Pilipenko
>Assignee: Aleksandr Pilipenko
>Priority: Major
>  Labels: pull-request-available
>
> Example build: 
> [https://github.com/apache/flink-connector-aws/actions/runs/7735404733]
> Tests are stuck retrying due to the following exception:
> {code:java}
> 797445 [main] WARN  
> org.apache.flink.streaming.connectors.kinesis.internals.publisher.fanout.FanOutRecordPublisher
>  [] - Encountered recoverable error TimeoutException. Backing off for 0 
> millis 00 (arn)
> org.apache.flink.streaming.connectors.kinesis.internals.publisher.fanout.FanOutShardSubscriber$RecoverableFanOutSubscriberException:
>  java.util.concurrent.TimeoutException: Timed out acquiring subscription - 
> 00 (arn)
>   at 
> org.apache.flink.streaming.connectors.kinesis.internals.publisher.fanout.FanOutShardSubscriber.handleErrorAndRethrow(FanOutShardSubscriber.java:327)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.connectors.kinesis.internals.publisher.fanout.FanOutShardSubscriber.openSubscriptionToShard(FanOutShardSubscriber.java:283)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.connectors.kinesis.internals.publisher.fanout.FanOutShardSubscriber.subscribeToShardAndConsumeRecords(FanOutShardSubscriber.java:210)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.connectors.kinesis.internals.publisher.fanout.FanOutRecordPublisher.runWithBackoff(FanOutRecordPublisher.java:177)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.connectors.kinesis.internals.publisher.fanout.FanOutRecordPublisher.run(FanOutRecordPublisher.java:130)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.connectors.kinesis.internals.publisher.fanout.FanOutRecordPublisherTest.testCancelExitsGracefully(FanOutRecordPublisherTest.java:595)
>  ~[test-classes/:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_402]
> {code}





[PR] [FLINK-34407][Connectors/Kinesis] Fix unstable test [flink-connector-aws]

2024-02-09 Thread via GitHub


z3d1k opened a new pull request, #128:
URL: https://github.com/apache/flink-connector-aws/pull/128

   ## Purpose of the change
   
   Tests that use `FakeKinesisFanOutBehavioursFactory` occasionally run into an issue while creating a mock for the Kinesis shard subscription:
   ```
   java.lang.ClassCastException: org.mockito.codegen.Subscription$MockitoMock$1959866534 cannot be cast to org.reactivestreams.Subscription
   ```
   This failure leads to a `TimeoutException` in `FanOutShardSubscriber`, resulting in the workflow timing out due to continuous retries.
   
   Use a fake subscription implementation instead of a Mockito mock to resolve this issue, as sketched below.
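   A minimal sketch of what such a fake might look like (hypothetical; the actual class in this PR may differ):
   ```java
   import org.reactivestreams.Subscription;
   
   // Hypothetical fake: a plain class genuinely implements the interface,
   // so the ClassCastException seen with the Mockito-generated proxy cannot occur.
   public class FakeSubscription implements Subscription {
   
       private long requested;
       private boolean cancelled;
   
       @Override
       public void request(long n) {
           requested += n; // record demand instead of serving real shard data
       }
   
       @Override
       public void cancel() {
           cancelled = true;
       }
   
       public long getRequested() {
           return requested;
       }
   
       public boolean isCancelled() {
           return cancelled;
       }
   }
   ```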
   
   ## Verifying this change
   
   This change is already covered by existing tests and can be verified by running the project build in GitHub Actions.
   
   ## Significant changes
   - [ ] Dependencies have been added or upgraded
   - [ ] Public API has been changed (Public API is any class annotated with 
`@Public(Evolving)`)
   - [ ] Serializers have been changed
   - [ ] New feature has been introduced
   





Re: [PR] [FLINK-26088][Connectors/ElasticSearch] Add Elasticsearch 8.0 support [flink-connector-elasticsearch]

2024-02-09 Thread via GitHub


mtfelisb commented on code in PR #53:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/53#discussion_r1484848274


##
flink-connector-elasticsearch8/src/test/java/org/apache/flink/connector/elasticsearch/sink/Elasticsearch8AsyncWriterITCase.java:
##
@@ -0,0 +1,276 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ *
+ */
+
+package org.apache.flink.connector.elasticsearch.sink;
+
+import org.apache.flink.connector.base.sink.writer.TestSinkInitContext;
+import 
org.apache.flink.connector.elasticsearch.sink.Elasticsearch8AsyncSinkITCase.DummyData;
+import org.apache.flink.metrics.Gauge;
+
+import co.elastic.clients.elasticsearch.core.bulk.IndexOperation;
+import co.elastic.clients.elasticsearch.core.bulk.UpdateOperation;
+import org.apache.http.HttpHost;
+import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.Timeout;
+import org.testcontainers.junit.jupiter.Testcontainers;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.locks.Condition;
+import java.util.concurrent.locks.Lock;
+import java.util.concurrent.locks.ReentrantLock;
+import java.util.function.Consumer;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+/** Integration tests for {@link Elasticsearch8AsyncWriter}. */
+@Testcontainers
+public class Elasticsearch8AsyncWriterITCase extends ElasticsearchSinkBaseITCase {
+    private TestSinkInitContext context;
+
+    private final Lock lock = new ReentrantLock();
+
+    private final Condition completed = lock.newCondition();
+
+    @BeforeEach
+    void setUp() {
+        this.context = new TestSinkInitContext();
+        this.client = getRestClient();
+    }
+
+    @AfterEach
+    void shutdown() throws IOException {
+        if (client != null) {
+            client.close();
+        }
+    }
+
+    @Test
+    @Timeout(5)
+    public void testBulkOnFlush() throws IOException, InterruptedException {
+        String index = "test-bulk-on-flush";
+        int maxBatchSize = 2;
+
+        try (final Elasticsearch8AsyncWriter writer = createWriter(index, maxBatchSize)) {
+            writer.write(new DummyData("test-1", "test-1"), null);
+            writer.write(new DummyData("test-2", "test-2"), null);
+
+            writer.flush(false);
+            assertIdsAreWritten(index, new String[] {"test-1", "test-2"});
+
+            writer.write(new DummyData("3", "test-3"), null);
+
+            writer.flush(true);
+            assertIdsAreWritten(index, new String[] {"test-3"});
+        }
+    }
+
+    @Test
+    @Timeout(5)
+    public void testBulkOnBufferTimeFlush() throws Exception {
+        String index = "test-bulk-on-time-in-buffer";
+        int maxBatchSize = 3;
+
+        try (final Elasticsearch8AsyncWriter writer = createWriter(index, maxBatchSize)) {
+            writer.write(new DummyData("test-1", "test-1"), null);
+            writer.flush(true);
+
+            assertIdsAreWritten(index, new String[] {"test-1"});
+
+            writer.write(new DummyData("test-2", "test-2"), null);
+            writer.write(new DummyData("test-3", "test-3"), null);
+
+            assertIdsAreNotWritten(index, new String[] {"test-2", "test-3"});
+            context.getTestProcessingTimeService().advance(6000L);
+
+            await();
+        }
+
+        assertIdsAreWritten(index, new String[] {"test-2", "test-3"});
+    }
+
+    @Test
+    @Timeout(5)
+    public void testBytesSentMetric() throws Exception {
+        String index = "test-bytes-sent-metrics";
+        int maxBatchSize = 3;
+
+        try (final Elasticsearch8AsyncWriter writer = createWriter(index, maxBatchSize)) {
+            assertThat(context.getNumBytesOutCounter().getCount()).isEqualTo(0);
+
+            writer.write(new DummyData("test-1", "test-1"), null);
+            writer.write(new DummyData("test-2", "test-2"), null);
+            writer.write(new DummyData("test-3", "test-3"), null);
+
+            await();
+        }
+
+        assertThat(context.getNumBytesOutCounter().getC

Re: [PR] [FLINK-34417] Log Job ID via MDC [flink]

2024-02-09 Thread via GitHub


rkhachatryan commented on PR #24292:
URL: https://github.com/apache/flink/pull/24292#issuecomment-1936575490

   @flinkbot run azure





Re: [PR] [FLINK-33402] 2 CONTINUED [flink]

2024-02-09 Thread via GitHub


varun1729DD commented on PR #24055:
URL: https://github.com/apache/flink/pull/24055#issuecomment-1936474945

   @tweise requesting your review on this. I added additional details above ^





Re: [PR] [FLINK-28693][table] Fix janino compile failed because the code generated refers the class in table-planner [flink]

2024-02-09 Thread via GitHub


seb-pereira commented on PR #24280:
URL: https://github.com/apache/flink/pull/24280#issuecomment-1936306609

   I have tested this change on 1.18.0 under the same conditions as reported in [FLINK-28693](https://issues.apache.org/jira/browse/FLINK-28693?focusedCommentId=17815957&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17815957) and it fixes the issue. The job deploys, consumes, and processes Kafka messages using the UDF as expected.





Re: [PR] [FLINK-34417] Log Job ID via MDC [flink]

2024-02-09 Thread via GitHub


rkhachatryan commented on PR #24292:
URL: https://github.com/apache/flink/pull/24292#issuecomment-1936234814

   @flinkbot run azure





Re: [PR] [FLINK-33936][table] Outputting Identical Results in Mini-Batch Aggregation with Set TTL [flink]

2024-02-09 Thread via GitHub


hackergin commented on PR #24290:
URL: https://github.com/apache/flink/pull/24290#issuecomment-1936224206

   @flinkbot run azure





Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-09 Thread via GitHub


hanyuzheng7 commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1484533063


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/scalar/ArraySortFunction.java:
##
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.functions.scalar;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.data.ArrayData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.functions.FunctionContext;
+import org.apache.flink.table.functions.SpecializedFunction;
+import org.apache.flink.table.types.CollectionDataType;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.util.FlinkRuntimeException;
+
+import javax.annotation.Nullable;
+
+import java.lang.invoke.MethodHandle;
+import java.util.Arrays;
+import java.util.Comparator;
+
+import static org.apache.flink.table.api.Expressions.$;
+
+/** Implementation of ARRAY_SORT function. */
+@Internal
+public class ArraySortFunction extends BuiltInScalarFunction {
+
+    private final ArrayData.ElementGetter elementGetter;
+    private final SpecializedFunction.ExpressionEvaluator greaterEvaluator;
+
+    private transient MethodHandle greaterHandle;
+
+    public ArraySortFunction(SpecializedFunction.SpecializedContext context) {
+        super(BuiltInFunctionDefinitions.ARRAY_SORT, context);
+        final DataType elementDataType =
+                ((CollectionDataType) context.getCallContext().getArgumentDataTypes().get(0))
+                        .getElementDataType();
+        elementGetter = ArrayData.createElementGetter(elementDataType.getLogicalType());
+        greaterEvaluator =
+                context.createEvaluator(
+                        $("element1").isGreater($("element2")),
+                        DataTypes.BOOLEAN(),
+                        DataTypes.FIELD("element1", elementDataType.notNull().toInternal()),
+                        DataTypes.FIELD("element2", elementDataType.notNull().toInternal()));
+    }
+
+    @Override
+    public void open(FunctionContext context) throws Exception {
+        greaterHandle = greaterEvaluator.open(context);
+    }
+
+    public @Nullable ArrayData eval(ArrayData array, Boolean... ascendingOrder) {
+        try {
+            if (array == null || ascendingOrder.length > 0 && ascendingOrder[0] == null) {
+                return null;
+            }
+            if (array.size() == 0) {
+                return array;
+            }
+            boolean isAscending = ascendingOrder.length > 0 ? ascendingOrder[0] : true;
+            Object[] elements = new Object[array.size()];
+            for (int i = 0; i < array.size(); i++) {
+                elements[i] = elementGetter.getElementOrNull(array, i);
+            }
+            if (isAscending) {
+                Comparator ascendingComparator = new MyComparator(true);
+                Arrays.sort(elements, ascendingComparator);
+            } else {
+                Comparator ascendingComparator = new MyComparator(false);
+                Arrays.sort(elements, ascendingComparator);
+            }
+            return new GenericArrayData(elements);
+        } catch (Throwable t) {
+            throw new FlinkRuntimeException(t);
+        }
+    }
+
+    private class MyComparator implements Comparator {
+        private final boolean isAscending;
+
+        public MyComparator(boolean isAscending) {
+            this.isAscending = isAscending;
+        }
+
+        @Override
+        public int compare(Object o1, Object o2) {
+            try {
+                if (isAscending) {
+                    if (o1 == null) {
+                        return -1;
+                    }
+                    if (o2 == null) {
+                        return -1;
+                    }
+                    boolean found = (boolean) greaterHandle.invoke(o1, o2);

Review Comment:
   Ok, I will update it.




Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-09 Thread via GitHub


hanyuzheng7 commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1484532480


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/scalar/ArraySortFunction.java:
##
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.functions.scalar;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.data.ArrayData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.functions.FunctionContext;
+import org.apache.flink.table.functions.SpecializedFunction;
+import org.apache.flink.table.types.CollectionDataType;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.util.FlinkRuntimeException;
+
+import javax.annotation.Nullable;
+
+import java.lang.invoke.MethodHandle;
+import java.util.Arrays;
+import java.util.Comparator;
+
+import static org.apache.flink.table.api.Expressions.$;
+
+/** Implementation of ARRAY_SORT function. */
+@Internal
+public class ArraySortFunction extends BuiltInScalarFunction {
+
+    private final ArrayData.ElementGetter elementGetter;
+    private final SpecializedFunction.ExpressionEvaluator greaterEvaluator;
+
+    private transient MethodHandle greaterHandle;
+
+    public ArraySortFunction(SpecializedFunction.SpecializedContext context) {
+        super(BuiltInFunctionDefinitions.ARRAY_SORT, context);
+        final DataType elementDataType =
+                ((CollectionDataType) context.getCallContext().getArgumentDataTypes().get(0))
+                        .getElementDataType();
+        elementGetter = ArrayData.createElementGetter(elementDataType.getLogicalType());
+        greaterEvaluator =
+                context.createEvaluator(
+                        $("element1").isGreater($("element2")),
+                        DataTypes.BOOLEAN(),
+                        DataTypes.FIELD("element1", elementDataType.notNull().toInternal()),
+                        DataTypes.FIELD("element2", elementDataType.notNull().toInternal()));
+    }
+
+    @Override
+    public void open(FunctionContext context) throws Exception {
+        greaterHandle = greaterEvaluator.open(context);
+    }
+
+    public @Nullable ArrayData eval(ArrayData array, Boolean... ascendingOrder) {
+        try {
+            if (array == null || ascendingOrder.length > 0 && ascendingOrder[0] == null) {
+                return null;
+            }
+            if (array.size() == 0) {
+                return array;
+            }
+            boolean isAscending = ascendingOrder.length > 0 ? ascendingOrder[0] : true;
+            Object[] elements = new Object[array.size()];
+            for (int i = 0; i < array.size(); i++) {
+                elements[i] = elementGetter.getElementOrNull(array, i);
+            }
+            if (isAscending) {
+                Comparator ascendingComparator = new MyComparator(true);
+                Arrays.sort(elements, ascendingComparator);
+            } else {
+                Comparator ascendingComparator = new MyComparator(false);
+                Arrays.sort(elements, ascendingComparator);
+            }

Review Comment:
   Ok, thanks for your suggestion.




Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-09 Thread via GitHub


hanyuzheng7 commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1484532093


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/scalar/ArraySortFunction.java:
##
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.functions.scalar;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.data.ArrayData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.functions.FunctionContext;
+import org.apache.flink.table.functions.SpecializedFunction;
+import org.apache.flink.table.types.CollectionDataType;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.util.FlinkRuntimeException;
+
+import javax.annotation.Nullable;
+
+import java.lang.invoke.MethodHandle;
+import java.util.Arrays;
+import java.util.Comparator;
+
+import static org.apache.flink.table.api.Expressions.$;
+
+/** Implementation of ARRAY_SORT function. */
+@Internal
+public class ArraySortFunction extends BuiltInScalarFunction {
+
+    private final ArrayData.ElementGetter elementGetter;
+    private final SpecializedFunction.ExpressionEvaluator greaterEvaluator;
+
+    private transient MethodHandle greaterHandle;
+
+    public ArraySortFunction(SpecializedFunction.SpecializedContext context) {
+        super(BuiltInFunctionDefinitions.ARRAY_SORT, context);
+        final DataType elementDataType =
+                ((CollectionDataType) context.getCallContext().getArgumentDataTypes().get(0))
+                        .getElementDataType();
+        elementGetter = ArrayData.createElementGetter(elementDataType.getLogicalType());

Review Comment:
   Ok, I will update it.






Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-09 Thread via GitHub


hanyuzheng7 commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1484526273


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/scalar/ArraySortFunction.java:
##
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.functions.scalar;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.data.ArrayData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.functions.FunctionContext;
+import org.apache.flink.table.functions.SpecializedFunction;
+import org.apache.flink.table.types.CollectionDataType;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.util.FlinkRuntimeException;
+
+import javax.annotation.Nullable;
+
+import java.lang.invoke.MethodHandle;
+import java.util.Arrays;
+import java.util.Comparator;
+
+import static org.apache.flink.table.api.Expressions.$;
+
+/** Implementation of ARRAY_SORT function. */
+@Internal
+public class ArraySortFunction extends BuiltInScalarFunction {
+
+    private final ArrayData.ElementGetter elementGetter;
+    private final SpecializedFunction.ExpressionEvaluator greaterEvaluator;
+
+    private transient MethodHandle greaterHandle;
+
+    public ArraySortFunction(SpecializedFunction.SpecializedContext context) {
+        super(BuiltInFunctionDefinitions.ARRAY_SORT, context);
+        final DataType elementDataType =
+                ((CollectionDataType) context.getCallContext().getArgumentDataTypes().get(0))
+                        .getElementDataType();
+        elementGetter = ArrayData.createElementGetter(elementDataType.getLogicalType());
+        greaterEvaluator =
+                context.createEvaluator(
+                        $("element1").isGreater($("element2")),
+                        DataTypes.BOOLEAN(),
+                        DataTypes.FIELD("element1", elementDataType.notNull().toInternal()),
+                        DataTypes.FIELD("element2", elementDataType.notNull().toInternal()));
+    }
+
+    @Override
+    public void open(FunctionContext context) throws Exception {
+        greaterHandle = greaterEvaluator.open(context);
+    }
+
+    public @Nullable ArrayData eval(ArrayData array, Boolean... ascendingOrder) {
+        try {
+            if (array == null || ascendingOrder.length > 0 && ascendingOrder[0] == null) {
+                return null;
+            }

Review Comment:
   Oh, I see: the description in docs/data/sql_functions.yml is different from https://github.com/apache/flink/pull/22951#issue-1788425522
   ```
   Returns:
   if the array is null or ascendingOrder is null return null. if array is empty return empty.
   ```
   And I agree with you: we should not allow the second argument to be NULL. I will update it.






Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-09 Thread via GitHub


hanyuzheng7 commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1484526273


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/scalar/ArraySortFunction.java:
##
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.functions.scalar;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.data.ArrayData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.functions.FunctionContext;
+import org.apache.flink.table.functions.SpecializedFunction;
+import org.apache.flink.table.types.CollectionDataType;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.util.FlinkRuntimeException;
+
+import javax.annotation.Nullable;
+
+import java.lang.invoke.MethodHandle;
+import java.util.Arrays;
+import java.util.Comparator;
+
+import static org.apache.flink.table.api.Expressions.$;
+
+/** Implementation of ARRAY_SORT function. */
+@Internal
+public class ArraySortFunction extends BuiltInScalarFunction {
+
+    private final ArrayData.ElementGetter elementGetter;
+    private final SpecializedFunction.ExpressionEvaluator greaterEvaluator;
+
+    private transient MethodHandle greaterHandle;
+
+    public ArraySortFunction(SpecializedFunction.SpecializedContext context) {
+        super(BuiltInFunctionDefinitions.ARRAY_SORT, context);
+        final DataType elementDataType =
+                ((CollectionDataType) context.getCallContext().getArgumentDataTypes().get(0))
+                        .getElementDataType();
+        elementGetter = ArrayData.createElementGetter(elementDataType.getLogicalType());
+        greaterEvaluator =
+                context.createEvaluator(
+                        $("element1").isGreater($("element2")),
+                        DataTypes.BOOLEAN(),
+                        DataTypes.FIELD("element1", elementDataType.notNull().toInternal()),
+                        DataTypes.FIELD("element2", elementDataType.notNull().toInternal()));
+    }
+
+    @Override
+    public void open(FunctionContext context) throws Exception {
+        greaterHandle = greaterEvaluator.open(context);
+    }
+
+    public @Nullable ArrayData eval(ArrayData array, Boolean... ascendingOrder) {
+        try {
+            if (array == null || ascendingOrder.length > 0 && ascendingOrder[0] == null) {
+                return null;
+            }

Review Comment:
   Yes, in  the description it said that 
   Returns:
   if the array is null or ascendingOrder is null return null. if array is 
empty return empty.
   And  I agree with you, we should not allow second argument NULL, I will 
update it.




Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-09 Thread via GitHub


hanyuzheng7 commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1484524404


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/ArrayComparableElementArgumentTypeStrategy.java:
##
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.types.inference.strategies;
+
+import org.apache.flink.table.functions.FunctionDefinition;
+import org.apache.flink.table.types.CollectionDataType;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.inference.ArgumentTypeStrategy;
+import org.apache.flink.table.types.inference.CallContext;
+import org.apache.flink.table.types.inference.Signature;
+import org.apache.flink.table.types.logical.LegacyTypeInformationType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeFamily;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+import org.apache.flink.table.types.logical.StructuredType;
+
+import java.util.List;
+import java.util.Optional;
+
+/**
+ * An {@link ArgumentTypeStrategy} that checks if the input argument is an ARRAY type and checks
+ * whether its elements are comparable.
+ *
+ * It requires one argument.
+ *
+ * For the rules which types are comparable with which types see {@link
+ * #areComparable(LogicalType, LogicalType)}.
+ */
+public final class ArrayComparableElementArgumentTypeStrategy implements ArgumentTypeStrategy {

Review Comment:
   Ok, I will do that.






Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-09 Thread via GitHub


hanyuzheng7 commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1484523821


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/BuiltInFunctionDefinitions.java:
##
@@ -231,6 +232,21 @@ ANY, and(logical(LogicalTypeRoot.BOOLEAN), LITERAL)
 
"org.apache.flink.table.runtime.functions.scalar.ArrayContainsFunction")
 .build();
 
+    public static final BuiltInFunctionDefinition ARRAY_SORT =
+            BuiltInFunctionDefinition.newBuilder()
+                    .name("ARRAY_SORT")
+                    .kind(SCALAR)
+                    .inputTypeStrategy(
+                            or(
+                                    sequence(new ArrayComparableElementArgumentTypeStrategy()),
+                                    sequence(
+                                            new ArrayComparableElementArgumentTypeStrategy(),
+                                            logical(LogicalTypeRoot.BOOLEAN

Review Comment:
   Ok, I will check other RDBMS.






[jira] [Commented] (FLINK-28747) "target_id can not be missing" in HTTP statefun request

2024-02-09 Thread Nathan Taylor Armstrong Lewis (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17816124#comment-17816124
 ] 

Nathan Taylor Armstrong Lewis commented on FLINK-28747:
---

In Protobuf 3, there is an `optional` label. An unset field can then be distinguished from a field that was set to the default value.

Would adding {{optional}} to 
https://github.com/apache/flink-statefun/blob/accd75ea0109845c4b4c0ddd74021147af1439d4/statefun-sdk-protos/src/main/protobuf/io/kafka-egress.proto#L28
 be enough to give the SDKs a way to distinguish a valid empty string key from an invalid unset key? I'm guessing there would have to be other changes elsewhere, since that file is for the egress and I don't see an equivalent protobuf file for Kafka ingress messages.

> "target_id can not be missing" in HTTP statefun request
> ---
>
> Key: FLINK-28747
> URL: https://issues.apache.org/jira/browse/FLINK-28747
> Project: Flink
>  Issue Type: Bug
>  Components: Stateful Functions
>Affects Versions: statefun-3.0.0, statefun-3.2.0, statefun-3.1.1
>Reporter: Stephan Weinwurm
>Priority: Major
>
> Hi all,
> We've suddenly started to see the following exception in our HTTP statefun function endpoints:
> {code}Traceback (most recent call last):
>   File 
> "/src/.venv/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", 
> line 403, in run_asgi
> result = await app(self.scope, self.receive, self.send)
>   File 
> "/src/.venv/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", 
> line 78, in __call__
> return await self.app(scope, receive, send)
>   File "/src/worker/baseplate_asgi/asgi/baseplate_asgi_middleware.py", line 
> 37, in __call__
> await span_processor.execute()
>   File "/src/worker/baseplate_asgi/asgi/asgi_http_span_processor.py", line 
> 61, in execute
> raise e
>   File "/src/worker/baseplate_asgi/asgi/asgi_http_span_processor.py", line 
> 57, in execute
> await self.app(self.scope, self.receive, self.send)
>   File "/src/.venv/lib/python3.9/site-packages/starlette/applications.py", 
> line 124, in __call__
> await self.middleware_stack(scope, receive, send)
>   File 
> "/src/.venv/lib/python3.9/site-packages/starlette/middleware/errors.py", line 
> 184, in __call__
> raise exc
>   File 
> "/src/.venv/lib/python3.9/site-packages/starlette/middleware/errors.py", line 
> 162, in __call__
> await self.app(scope, receive, _send)
>   File 
> "/src/.venv/lib/python3.9/site-packages/starlette/middleware/exceptions.py", 
> line 75, in __call__
> raise exc
>   File 
> "/src/.venv/lib/python3.9/site-packages/starlette/middleware/exceptions.py", 
> line 64, in __call__
> await self.app(scope, receive, sender)
>   File "/src/.venv/lib/python3.9/site-packages/starlette/routing.py", line 
> 680, in __call__
> await route.handle(scope, receive, send)
>   File "/src/.venv/lib/python3.9/site-packages/starlette/routing.py", line 
> 275, in handle
> await self.app(scope, receive, send)
>   File "/src/.venv/lib/python3.9/site-packages/starlette/routing.py", line 
> 65, in app
> response = await func(request)
>   File "/src/worker/baseplate_statefun/server/asgi/make_statefun_handler.py", 
> line 25, in statefun_handler
> result = await handler.handle_async(request_body)
>   File "/src/.venv/lib/python3.9/site-packages/statefun/request_reply_v3.py", 
> line 262, in handle_async
> msg = Message(target_typename=sdk_address.typename, 
> target_id=sdk_address.id,
>   File "/src/.venv/lib/python3.9/site-packages/statefun/messages.py", line 
> 42, in __init__
> raise ValueError("target_id can not be missing"){code}
> Interestingly, this has started to happen in three separate Flink deployments 
> at the very same time. The only thing in common between the three deployments 
> is that they consume the same Kafka topics.
> No deployments had happened when the issue started, which was on July 28th at 3:05 PM. We have been seeing the error continuously since.
> We were also able to extract the request that Flink sends to the HTTP 
> statefun endpoint:
> {code}{'invocation': {'target': {'namespace': 'com.x.dummy', 'type': 
> 'dummy'}, 'invocations': [{'argument': {'typename': 
> 'type.googleapis.com/v2_event.Event', 'has_value': True, 'value': 
> '-redacted-'}}]}}
> {code}
> As you can see, either no `id` field is present in the `invocation.target` object or the `target_id` was an empty string.
>  
> This is our module.yaml from one of the Flink deployments:
>  
> {code}
> version: "3.0"
> module:
>   meta:
>     type: remote
>   spec:
>     endpoints:
>       - endpoint:
>           meta:
>             kind: io.statefun.endpoints.v1/http
>           spec:
>             functions: com.x.dummy/dummy
>             urlPathTemplate: [http://x-worker-dummy.x-fun

Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-09 Thread via GitHub


dawidwys commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1484478770


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/BuiltInFunctionDefinitions.java:
##
@@ -231,6 +232,21 @@ ANY, and(logical(LogicalTypeRoot.BOOLEAN), LITERAL)
 
"org.apache.flink.table.runtime.functions.scalar.ArrayContainsFunction")
 .build();
 
+    public static final BuiltInFunctionDefinition ARRAY_SORT =
+            BuiltInFunctionDefinition.newBuilder()
+                    .name("ARRAY_SORT")
+                    .kind(SCALAR)
+                    .inputTypeStrategy(
+                            or(
+                                    sequence(new ArrayComparableElementArgumentTypeStrategy()),
+                                    sequence(
+                                            new ArrayComparableElementArgumentTypeStrategy(),
+                                            logical(LogicalTypeRoot.BOOLEAN

Review Comment:
   You should then update the output strategy to take the second argument into account. So far you only have: `nullableIfArgs(argument(0))`






Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-09 Thread via GitHub


dawidwys commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1484479808


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/BuiltInFunctionDefinitions.java:
##
@@ -231,6 +232,21 @@ ANY, and(logical(LogicalTypeRoot.BOOLEAN), LITERAL)
 
"org.apache.flink.table.runtime.functions.scalar.ArrayContainsFunction")
 .build();
 
+    public static final BuiltInFunctionDefinition ARRAY_SORT =
+            BuiltInFunctionDefinition.newBuilder()
+                    .name("ARRAY_SORT")
+                    .kind(SCALAR)
+                    .inputTypeStrategy(
+                            or(
+                                    sequence(new ArrayComparableElementArgumentTypeStrategy()),
+                                    sequence(
+                                            new ArrayComparableElementArgumentTypeStrategy(),
+                                            logical(LogicalTypeRoot.BOOLEAN

Review Comment:
   Can you check what the behaviour is in other RDBMS, e.g. Postgres, Oracle, SQL Server?
   
   Personally, I find it a bit weird that we accept a nullable type for the second argument.









[jira] [Commented] (FLINK-21227) Upgrade Protobuf 3.7.0 for (power)ppc64le support

2024-02-09 Thread Nathan Taylor Armstrong Lewis (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17816116#comment-17816116
 ] 

Nathan Taylor Armstrong Lewis commented on FLINK-21227:
---

Is there any particular reason not to use the {{$\{protoc.version\}}} property 
that is defined in 
[https://github.com/apache/flink/blob/d2abd744621c6f0f65e7154a2c1b53bcaf78e90b/pom.xml#L161]
 to keep the version of protoc used consistent across the repo?

Specifically that might look something like changing 
[https://github.com/bivasda1/flink/blob/0d5ea7bccf8847b3fdc2049c381764b08dc895e9/flink-formats/flink-parquet/pom.xml#L253]
 to:
{code:java}
com.google.protobuf:protoc:${protoc.version}:exe:${os.detected.classifier}
{code}

I'm not familiar with parquet, so this might be a horrible idea due to 
something parquet-specific.

> Upgrade Protobuf 3.7.0 for (power)ppc64le support
> -
>
> Key: FLINK-21227
> URL: https://issues.apache.org/jira/browse/FLINK-21227
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Bivas
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> com.google.protobuf:*protoc:3.5.1:exe* was not supported on Power. Later 
> versions added multi-arch support, including Power (ppc64le). Using 
> *protoc:3.7.0:exe* we were able to build, and the E2E tests passed successfully.
> https://github.com/bivasda1/flink/blob/master/flink-formats/flink-parquet/pom.xml#L253



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-09 Thread via GitHub


dawidwys commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1484473796


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/scalar/ArraySortFunction.java:
##
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.functions.scalar;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.data.ArrayData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.functions.FunctionContext;
+import org.apache.flink.table.functions.SpecializedFunction;
+import org.apache.flink.table.types.CollectionDataType;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.util.FlinkRuntimeException;
+
+import javax.annotation.Nullable;
+
+import java.lang.invoke.MethodHandle;
+import java.util.Arrays;
+import java.util.Comparator;
+
+import static org.apache.flink.table.api.Expressions.$;
+
+/** Implementation of ARRAY_SORT function. */
+@Internal
+public class ArraySortFunction extends BuiltInScalarFunction {
+
+private final ArrayData.ElementGetter elementGetter;
+private final SpecializedFunction.ExpressionEvaluator greaterEvaluator;
+
+private transient MethodHandle greaterHandle;
+
+public ArraySortFunction(SpecializedFunction.SpecializedContext context) {
+super(BuiltInFunctionDefinitions.ARRAY_SORT, context);
+final DataType elementDataType =
+((CollectionDataType) 
context.getCallContext().getArgumentDataTypes().get(0))
+.getElementDataType();
+elementGetter = 
ArrayData.createElementGetter(elementDataType.getLogicalType());
+greaterEvaluator =
+context.createEvaluator(
+$("element1").isGreater($("element2")),
+DataTypes.BOOLEAN(),
+DataTypes.FIELD("element1", 
elementDataType.notNull().toInternal()),
+DataTypes.FIELD("element2", 
elementDataType.notNull().toInternal()));
+}
+
+@Override
+public void open(FunctionContext context) throws Exception {
+greaterHandle = greaterEvaluator.open(context);
+}
+
+public @Nullable ArrayData eval(ArrayData array, Boolean... 
ascendingOrder) {
+try {
+if (array == null || ascendingOrder.length > 0 && 
ascendingOrder[0] == null) {
+return null;
+}

Review Comment:
   Where? Still can't see it:
   > Returns the array in sorted order. Sorts the input array in ascending or 
descending order according to the natural ordering of the array elements. NULL 
elements are placed at the beginning of the returned array in ascending order 
or at the end of the returned array in descending order. If the array itself is 
null, the function will return null. The optional ascendingOrder argument 
defaults to true if not specified.
   
   It says: output is a sorted array, which might be null if the input array is 
null.
   
   Do I read it wrong? The output type does not depend on the `ascendingOrder` 
from that description.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33958] Fix IntervalJoin restore test flakiness [flink]

2024-02-09 Thread via GitHub


bvarghese1 commented on PR #24196:
URL: https://github.com/apache/flink/pull/24196#issuecomment-1936127479

   > @bvarghese1 Could you please explain how this improves the test 
reliability?
   
   Previously the input was:
   ```
   Row.of(7, 3, "2020-04-15 08:00:16"),
   Row.of(11, 7, "2020-04-15 08:00:11"),
   Row.of(13, 10, "2020-04-15 08:00:13")
   ```
   This would make the processing unreliable depending on when the watermark is 
processed. If it's not timely, events like `Row.of(11, 7, "2020-04-15 
08:00:11")` get processed; otherwise they get discarded for being late. To 
fix this, I changed the order of the input to make the tests a bit more reliable. 
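   
   Presumably the reorder just sorts the rows by event time so the watermark 
cannot overtake a pending row; a reconstruction (not the actual diff) would be:
   ```
   Row.of(11, 7, "2020-04-15 08:00:11"),
   Row.of(13, 10, "2020-04-15 08:00:13"),
   Row.of(7, 3, "2020-04-15 08:00:16")
   ```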


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-09 Thread via GitHub


dawidwys commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1484467843


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/ArrayComparableElementArgumentTypeStrategy.java:
##
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.types.inference.strategies;
+
+import org.apache.flink.table.functions.FunctionDefinition;
+import org.apache.flink.table.types.CollectionDataType;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.inference.ArgumentTypeStrategy;
+import org.apache.flink.table.types.inference.CallContext;
+import org.apache.flink.table.types.inference.Signature;
+import org.apache.flink.table.types.logical.LegacyTypeInformationType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeFamily;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+import org.apache.flink.table.types.logical.StructuredType;
+
+import java.util.List;
+import java.util.Optional;
+
+/**
+ * An {@link ArgumentTypeStrategy} that checks if the input argument is an 
ARRAY type and
+ * whether its elements are comparable.
+ *
+ * It requires one argument.
+ *
+ * For the rules which types are comparable with which types see {@link
+ * #areComparable(LogicalType, LogicalType)}.
+ */
+public final class ArrayComparableElementArgumentTypeStrategy implements 
ArgumentTypeStrategy {

Review Comment:
   Would it make sense to convert the old class to `ArgumentTypeStrategy`? 
Building an `InputTypeStrategy` from `ArgumentTypeStrategy` usually should be 
simple: e.g. `sequence(comparable())`
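   
   For illustration, a rough sketch of that composition using the strategy class 
from this PR (a `comparable()` factory method is hypothetical, so the class is 
instantiated directly here):
   
   ```java
   import static org.apache.flink.table.types.inference.InputTypeStrategies.logical;
   import static org.apache.flink.table.types.inference.InputTypeStrategies.or;
   import static org.apache.flink.table.types.inference.InputTypeStrategies.sequence;
   
   import org.apache.flink.table.types.inference.ArgumentTypeStrategy;
   import org.apache.flink.table.types.inference.InputTypeStrategy;
   import org.apache.flink.table.types.logical.LogicalTypeRoot;
   
   // one-arg variant: ARRAY_SORT(array); two-arg variant: ARRAY_SORT(array, ascendingOrder)
   ArgumentTypeStrategy comparableArray = new ArrayComparableElementArgumentTypeStrategy();
   InputTypeStrategy arraySortInput =
           or(sequence(comparableArray),
              sequence(comparableArray, logical(LogicalTypeRoot.BOOLEAN)));
   ```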



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34282][docker] Updates snapshot workflow configuration [flink-docker]

2024-02-09 Thread via GitHub


XComp commented on code in PR #180:
URL: https://github.com/apache/flink-docker/pull/180#discussion_r1484449607


##
.github/workflows/snapshot.yml:
##
@@ -38,14 +38,14 @@ jobs:
   matrix:
 java_version: [8, 11]
 build:
-  - flink_version: 1.19-SNAPSHOT
+  - flink_version: 1.20-SNAPSHOT
 branch: dev-master
+  - flink_version: 1.19-SNAPSHOT
+branch: dev-1.19
   - flink_version: 1.18-SNAPSHOT
 branch: dev-1.18
   - flink_version: 1.17-SNAPSHOT
 branch: dev-1.17
-  - flink_version: 1.16-SNAPSHOT

Review Comment:
   @JingGe can you merge the PR after approving it? I'm gonna be offline for 
the rest of the day, but it would be nice to have the snapshot Docker images 
created over the weekend already.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-09 Thread via GitHub


hanyuzheng7 commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1484427940


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/ArrayComparableElementArgumentTypeStrategy.java:
##
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.types.inference.strategies;
+
+import org.apache.flink.table.functions.FunctionDefinition;
+import org.apache.flink.table.types.CollectionDataType;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.inference.ArgumentTypeStrategy;
+import org.apache.flink.table.types.inference.CallContext;
+import org.apache.flink.table.types.inference.Signature;
+import org.apache.flink.table.types.logical.LegacyTypeInformationType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeFamily;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+import org.apache.flink.table.types.logical.StructuredType;
+
+import java.util.List;
+import java.util.Optional;
+
+/**
+ * An {@link ArgumentTypeStrategy} that checks if the input argument is an 
ARRAY type and
+ * whether its elements are comparable.
+ *
+ * It requires one argument.
+ *
+ * For the rules which types are comparable with which types see {@link
+ * #areComparable(LogicalType, LogicalType)}.
+ */
+public final class ArrayComparableElementArgumentTypeStrategy implements 
ArgumentTypeStrategy {

Review Comment:
   Yes, 
https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/ArrayComparableElementTypeStrategy.java
 
   was also written by me, but it returns an InputTypeStrategy.
   We need an ArgumentTypeStrategy here, so I created a new class that returns an 
ArgumentTypeStrategy.
   https://github.com/apache/flink/pull/22951/files#r1484440158
   Maybe we can extract the shared code into a util and reuse it.
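   
   A hedged sketch of such an extraction (the class name is made up, and the 
comparability check is a placeholder; the real rules live in the two strategies 
today):
   
   ```java
   import org.apache.flink.table.types.logical.LogicalType;
   import org.apache.flink.table.types.logical.LogicalTypeRoot;
   
   /** Hypothetical shared helper both strategies could delegate to. */
   final class ComparableElementUtil {
   
       private ComparableElementUtil() {}
   
       /** Placeholder rule: treats types with the same non-RAW type root as comparable. */
       static boolean areComparable(LogicalType left, LogicalType right) {
           return left.getTypeRoot() == right.getTypeRoot()
                   && left.getTypeRoot() != LogicalTypeRoot.RAW;
       }
   }
   ```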



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-09 Thread via GitHub


hanyuzheng7 commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1484440158


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/BuiltInFunctionDefinitions.java:
##
@@ -231,6 +232,21 @@ ANY, and(logical(LogicalTypeRoot.BOOLEAN), LITERAL)
 
"org.apache.flink.table.runtime.functions.scalar.ArrayContainsFunction")
 .build();
 
+public static final BuiltInFunctionDefinition ARRAY_SORT =
+BuiltInFunctionDefinition.newBuilder()
+.name("ARRAY_SORT")
+.kind(SCALAR)
+.inputTypeStrategy(
+or(
+sequence(new 
ArrayComparableElementArgumentTypeStrategy()),
+sequence(

Review Comment:
   I use `sequence` here, so I need an `ArgumentTypeStrategy` here.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-09 Thread via GitHub


hanyuzheng7 commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1484430188


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/scalar/ArraySortFunction.java:
##
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.functions.scalar;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.data.ArrayData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.functions.FunctionContext;
+import org.apache.flink.table.functions.SpecializedFunction;
+import org.apache.flink.table.types.CollectionDataType;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.util.FlinkRuntimeException;
+
+import javax.annotation.Nullable;
+
+import java.lang.invoke.MethodHandle;
+import java.util.Arrays;
+import java.util.Comparator;
+
+import static org.apache.flink.table.api.Expressions.$;
+
+/** Implementation of ARRAY_SORT function. */
+@Internal
+public class ArraySortFunction extends BuiltInScalarFunction {
+
+private final ArrayData.ElementGetter elementGetter;
+private final SpecializedFunction.ExpressionEvaluator greaterEvaluator;
+
+private transient MethodHandle greaterHandle;
+
+public ArraySortFunction(SpecializedFunction.SpecializedContext context) {
+super(BuiltInFunctionDefinitions.ARRAY_SORT, context);
+final DataType elementDataType =
+((CollectionDataType) 
context.getCallContext().getArgumentDataTypes().get(0))
+.getElementDataType();
+elementGetter = 
ArrayData.createElementGetter(elementDataType.getLogicalType());
+greaterEvaluator =
+context.createEvaluator(
+$("element1").isGreater($("element2")),
+DataTypes.BOOLEAN(),
+DataTypes.FIELD("element1", 
elementDataType.notNull().toInternal()),
+DataTypes.FIELD("element2", 
elementDataType.notNull().toInternal()));
+}
+
+@Override
+public void open(FunctionContext context) throws Exception {
+greaterHandle = greaterEvaluator.open(context);
+}
+
+public @Nullable ArrayData eval(ArrayData array, Boolean... 
ascendingOrder) {
+try {
+if (array == null || ascendingOrder.length > 0 && 
ascendingOrder[0] == null) {
+return null;
+}

Review Comment:
   From the description, it seems that the output strategy depends on the second 
argument.
   https://github.com/apache/flink/pull/22951#issue-1788425522



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-09 Thread via GitHub


hanyuzheng7 commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1484422529


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/BuiltInFunctionDefinitions.java:
##
@@ -231,6 +232,21 @@ ANY, and(logical(LogicalTypeRoot.BOOLEAN), LITERAL)
 
"org.apache.flink.table.runtime.functions.scalar.ArrayContainsFunction")
 .build();
 
+public static final BuiltInFunctionDefinition ARRAY_SORT =
+BuiltInFunctionDefinition.newBuilder()
+.name("ARRAY_SORT")
+.kind(SCALAR)
+.inputTypeStrategy(
+or(
+sequence(new 
ArrayComparableElementArgumentTypeStrategy()),
+sequence(
+new 
ArrayComparableElementArgumentTypeStrategy(),
+logical(LogicalTypeRoot.BOOLEAN

Review Comment:
   Hi @dawidwys, from the description, if the second argument is null, we 
will return NULL:
   `SELECT ARRAY_SLICE(['a', 'b', 'c', 'd', 'e'], null)` returns `null`.
   
   https://github.com/apache/flink/pull/22951#issue-1788425522



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34282][docker] Updates snapshot workflow configuration [flink-docker]

2024-02-09 Thread via GitHub


XComp commented on code in PR #180:
URL: https://github.com/apache/flink-docker/pull/180#discussion_r1484401070


##
.github/workflows/snapshot.yml:
##
@@ -38,14 +38,14 @@ jobs:
   matrix:
 java_version: [8, 11]
 build:
-  - flink_version: 1.19-SNAPSHOT
+  - flink_version: 1.20-SNAPSHOT
 branch: dev-master
+  - flink_version: 1.19-SNAPSHOT
+branch: dev-1.19
   - flink_version: 1.18-SNAPSHOT
 branch: dev-1.18
   - flink_version: 1.17-SNAPSHOT
 branch: dev-1.17
-  - flink_version: 1.16-SNAPSHOT

Review Comment:
   I updated the PR to include this minor README change in a hotfix PR :+1: 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Fix typo on schema_evolution.md [flink]

2024-02-09 Thread via GitHub


rafaelzimmermann closed pull request #22943: Fix typo on schema_evolution.md
URL: https://github.com/apache/flink/pull/22943


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34239) Introduce a deep copy method of SerializerConfig for merging with Table configs in org.apache.flink.table.catalog.DataTypeFactoryImpl

2024-02-09 Thread Kumar Mallikarjuna (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17816077#comment-17816077
 ] 

Kumar Mallikarjuna commented on FLINK-34239:


Thank you, [~zjureel]!

> Introduce a deep copy method of SerializerConfig for merging with Table 
> configs in org.apache.flink.table.catalog.DataTypeFactoryImpl 
> --
>
> Key: FLINK-34239
> URL: https://issues.apache.org/jira/browse/FLINK-34239
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Core
>Affects Versions: 1.19.0
>Reporter: Zhanghao Chen
>Assignee: Kumar Mallikarjuna
>Priority: Major
>
> *Problem*
> Currently, 
> org.apache.flink.table.catalog.DataTypeFactoryImpl#createSerializerExecutionConfig
>  will create a deep-copy of the SerializerConfig and merge Table config into 
> it. However, the deep copy is done by manually calling the getter and setter 
> methods of SerializerConfig, and is prone to human error, e.g. forgetting to 
> copy a newly added field in SerializerConfig.
> *Proposal*
> Introduce a deep copy method for SerializerConfig and replace the current 
> implementation 
> in 
> org.apache.flink.table.catalog.DataTypeFactoryImpl#createSerializerExecutionConfig.
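
A hedged sketch of the proposal (class and field names here are illustrative, 
not the real SerializerConfig):
{code:java}
import java.io.Serializable;
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative only; the field names are hypothetical. */
class SerializerConfigSketch implements Serializable {

    private Map<Class<?>, Class<?>> defaultKryoSerializers = new LinkedHashMap<>();
    private boolean forceKryo;

    /** One canonical deep copy, so a newly added field is copied here or nowhere. */
    SerializerConfigSketch copy() {
        SerializerConfigSketch copied = new SerializerConfigSketch();
        copied.defaultKryoSerializers = new LinkedHashMap<>(this.defaultKryoSerializers);
        copied.forceKryo = this.forceKryo;
        return copied;
    }
}
{code}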



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [hotfix] Bumps Flink snapshot version and related flink-shaded version [flink-benchmarks]

2024-02-09 Thread via GitHub


XComp closed pull request #86: [hotfix] Bumps Flink snapshot version and 
related flink-shaded version
URL: https://github.com/apache/flink-benchmarks/pull/86


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






Re: [PR] [FLINK-34417] Log Job ID via MDC [flink]

2024-02-09 Thread via GitHub


rkhachatryan commented on PR #24292:
URL: https://github.com/apache/flink/pull/24292#issuecomment-1935958746

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34401][docs-zh] Translate "Flame Graphs" page into Chinese [flink]

2024-02-09 Thread via GitHub


lxliyou001 commented on PR #24279:
URL: https://github.com/apache/flink/pull/24279#issuecomment-1935949613

   > Thanks @lxliyou001 for your contribution. I just left some comments.
   
   Hi @JingGe, thank you for your advice; it is my first time participating in 
the Flink project. I have added 2 commits with your suggestions. Please review 
them when you have free time, thank you.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






Re: [PR] [hotfix] Bumps Flink snapshot version and related flink-shaded version [flink-benchmarks]

2024-02-09 Thread via GitHub


snuyanzin commented on code in PR #86:
URL: https://github.com/apache/flink-benchmarks/pull/86#discussion_r1484312133


##
pom.xml:
##
@@ -45,8 +45,8 @@ under the License.
 


UTF-8
-   1.19-SNAPSHOT
-   16.1
+   1.20-SNAPSHOT
+   17.0
2.0.54.Final

Review Comment:
   ```suggestion
2.0.59.Final
   ```
   I think this also should be updated as a result of FLINK-30772



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






Re: [PR] [FLINK-34386][state] Add RocksDB bloom filter metrics [flink]

2024-02-09 Thread via GitHub


JingGe commented on PR #24274:
URL: https://github.com/apache/flink/pull/24274#issuecomment-1935899938

   There is some issue while building the Hadoop image: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57409&view=logs&j=ef799394-2d67-5ff4-b2e5-410b80c9c0af&t=9e5768bc-daae-5f5f-1861-e58617922c7a&l=10543


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [HOTFIX][Doc] update readme to clarify that generate-stackbrew-library-docker.sh should be run on local machine directly. [flink-docker]

2024-02-09 Thread via GitHub


JingGe merged PR #163:
URL: https://github.com/apache/flink-docker/pull/163


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-09 Thread via GitHub


dawidwys commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1484177231


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/scalar/ArraySortFunction.java:
##
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.functions.scalar;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.data.ArrayData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.functions.FunctionContext;
+import org.apache.flink.table.functions.SpecializedFunction;
+import org.apache.flink.table.types.CollectionDataType;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.util.FlinkRuntimeException;
+
+import javax.annotation.Nullable;
+
+import java.lang.invoke.MethodHandle;
+import java.util.Arrays;
+import java.util.Comparator;
+
+import static org.apache.flink.table.api.Expressions.$;
+
+/** Implementation of ARRAY_SORT function. */
+@Internal
+public class ArraySortFunction extends BuiltInScalarFunction {
+
+private final ArrayData.ElementGetter elementGetter;
+private final SpecializedFunction.ExpressionEvaluator greaterEvaluator;
+
+private transient MethodHandle greaterHandle;
+
+public ArraySortFunction(SpecializedFunction.SpecializedContext context) {
+super(BuiltInFunctionDefinitions.ARRAY_SORT, context);
+final DataType elementDataType =
+((CollectionDataType) 
context.getCallContext().getArgumentDataTypes().get(0))
+.getElementDataType();
+elementGetter = 
ArrayData.createElementGetter(elementDataType.getLogicalType());
+greaterEvaluator =
+context.createEvaluator(
+$("element1").isGreater($("element2")),
+DataTypes.BOOLEAN(),
+DataTypes.FIELD("element1", 
elementDataType.notNull().toInternal()),
+DataTypes.FIELD("element2", 
elementDataType.notNull().toInternal()));
+}
+
+@Override
+public void open(FunctionContext context) throws Exception {
+greaterHandle = greaterEvaluator.open(context);
+}
+
+public @Nullable ArrayData eval(ArrayData array, Boolean... 
ascendingOrder) {
+try {
+if (array == null || ascendingOrder.length > 0 && 
ascendingOrder[0] == null) {
+return null;
+}

Review Comment:
   This is not covered by the output strategy, is it? The output strategy does 
not depend on the second argument.



##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/scalar/ArraySortFunction.java:
##
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.functions.scalar;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.data.ArrayData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.functions.FunctionContext;
+import org.apache.flink.table.functions.SpecializedFunction;
+import org.apache.flink.table.types.Collectio

Re: [PR] [FLINK-34282][docker] Updates snapshot workflow configuration [flink-docker]

2024-02-09 Thread via GitHub


JingGe commented on code in PR #180:
URL: https://github.com/apache/flink-docker/pull/180#discussion_r1484202025


##
.github/workflows/snapshot.yml:
##
@@ -38,14 +38,14 @@ jobs:
   matrix:
 java_version: [8, 11]
 build:
-  - flink_version: 1.19-SNAPSHOT
+  - flink_version: 1.20-SNAPSHOT
 branch: dev-master
+  - flink_version: 1.19-SNAPSHOT
+branch: dev-1.19
   - flink_version: 1.18-SNAPSHOT
 branch: dev-1.18
   - flink_version: 1.17-SNAPSHOT
 branch: dev-1.17
-  - flink_version: 1.16-SNAPSHOT

Review Comment:
   I assume you mean that we only maintain images for the latest three versions. I 
couldn't find that in the 
https://github.com/apache/flink-docker/blob/master/README.md you mentioned in 
the PR description. Does it make sense to update the PR description to make the 
intention of the PR a little clearer? 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34282][docker] Merge commit 44f05828 (pre-FLINK-34282 dev-master) into dev-1.19 [flink-docker]

2024-02-09 Thread via GitHub


XComp closed pull request #179: [FLINK-34282][docker] Merge commit 44f05828 
(pre-FLINK-34282 dev-master) into dev-1.19
URL: https://github.com/apache/flink-docker/pull/179


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34282][docker] Merge commit 44f05828 (pre-FLINK-34282 dev-master) into dev-1.19 [flink-docker]

2024-02-09 Thread via GitHub


XComp commented on PR #179:
URL: https://github.com/apache/flink-docker/pull/179#issuecomment-1935753499

   This merge commit PR can be closed. I force-pushed a rebase to 
{{dev-master~1}} 
([44f05828](https://github.com/apache/flink-docker/commit/44f058287cc956a620b12b6f8ed214e44dc3db77))
 after a brief discussion in FLINK-34282


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34282) Create a release branch

2024-02-09 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17816022#comment-17816022
 ] 

Matthias Pohl commented on FLINK-34282:
---

I rebased {{dev-1.19}} to {{dev-master~1}} 
([44f05828|https://github.com/apache/flink-docker/commit/44f058287cc956a620b12b6f8ed214e44dc3db77])

There's another open [PR #180|https://github.com/apache/flink-docker/pull/180] 
to "fix" the {{master}} branch.

> Create a release branch
> ---
>
> Key: FLINK-34282
> URL: https://issues.apache.org/jira/browse/FLINK-34282
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: lincoln lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
> Attachments: screenshot-1.png
>
>
> If you are doing a new minor release, you need to update Flink version in the 
> following repositories and the [AzureCI project 
> configuration|https://dev.azure.com/apache-flink/apache-flink/]:
>  * [apache/flink|https://github.com/apache/flink]
>  * [apache/flink-docker|https://github.com/apache/flink-docker]
>  * [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks]
> Patch releases don't require these repositories to be touched. Simply 
> check out the already existing branch for that version:
> {code:java}
> $ git checkout release-$SHORT_RELEASE_VERSION
> {code}
> h4. Flink repository
> Create a branch for the new version that we want to release before updating 
> the master branch to the next development version:
> {code:bash}
> $ cd ./tools
> tools $ releasing/create_snapshot_branch.sh
> tools $ git checkout master
> tools $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION 
> NEW_VERSION=$NEXT_SNAPSHOT_VERSION releasing/update_branch_version.sh
> {code}
> In the {{master}} branch, add a new value (e.g. {{v1_16("1.16")}}) to 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  as the last entry:
> {code:java}
> // ...
> v1_12("1.12"),
> v1_13("1.13"),
> v1_14("1.14"),
> v1_15("1.15"),
> v1_16("1.16");
> {code}
> Additionally in master, update the branch list of the GitHub Actions nightly 
> workflow (see 
> [apache/flink:.github/workflows/nightly-trigger.yml#L31ff|https://github.com/apache/flink/blob/master/.github/workflows/nightly-trigger.yml#L31]):
>  The two most-recent releases and master should be covered.
> The newly created branch and updated {{master}} branch need to be pushed to 
> the official repository.
> h4. Flink Docker Repository
> Afterwards fork off from {{dev-master}} a {{dev-x.y}} branch in the 
> [apache/flink-docker|https://github.com/apache/flink-docker] repository. Make 
> sure that 
> [apache/flink-docker:.github/workflows/ci.yml|https://github.com/apache/flink-docker/blob/dev-master/.github/workflows/ci.yml]
>  points to the correct snapshot version; for {{dev-x.y}} it should point to 
> {{x.y-SNAPSHOT}}, while for {{dev-master}} it should point to the most 
> recent snapshot version ({{$NEXT_SNAPSHOT_VERSION}}).
> After pushing the new minor release branch, as the last step you should also 
> update the documentation workflow to also build the documentation for the new 
> release branch. Check [Managing 
> Documentation|https://cwiki.apache.org/confluence/display/FLINK/Managing+Documentation]
>  on details on how to do that. You may also want to manually trigger a build 
> to make the changes visible as soon as possible.
> h4. Flink Benchmark Repository
> First of all, checkout the {{master}} branch to {{dev-x.y}} branch in 
> [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks], so that 
> we can have a branch named {{dev-x.y}} which could be built on top of 
> (${{CURRENT_SNAPSHOT_VERSION}}).
> Then, inside the repository you need to manually update the {{flink.version}} 
> property inside the parent *pom.xml* file. It should be pointing to the most 
> recent snapshot version ($NEXT_SNAPSHOT_VERSION). For example:
> {code:xml}
> 1.18-SNAPSHOT
> {code}
> h4. AzureCI Project Configuration
> The new release branch needs to be configured within AzureCI to make Azure 
> aware of the new release branch. This matter can only be handled by Ververica 
> employees since they own the AzureCI setup.
>  
> 
> h3. Expectations (Minor Version only if not stated otherwise)
>  * Release branch has been created and pushed
>  * Changes on the new release branch are picked up by [Azure 
> CI|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary]
>  * {{master}} branch has the version information updated to the new version 
> (check pom.xml files and 
>  * 
> [apache-flink:flink-annotations/src/main/java/org/apache

[jira] [Updated] (FLINK-33163) Support Java 21 (LTS)

2024-02-09 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-33163:
--
Fix Version/s: (was: 1.19.0)

> Support Java 21 (LTS)
> -
>
> Key: FLINK-33163
> URL: https://issues.apache.org/jira/browse/FLINK-33163
> Project: Flink
>  Issue Type: New Feature
>Reporter: Maciej Bryński
>Priority: Major
>
> Based on https://issues.apache.org/jira/browse/FLINK-15736 we should have a 
> similar ticket for Java 21 LTS.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-34282][docker] Updates snapshot workflow configuration [flink-docker]

2024-02-09 Thread via GitHub


XComp opened a new pull request, #180:
URL: https://github.com/apache/flink-docker/pull/180

   This way we are aligned with what's described in the 
[README.md](https://github.com/apache/flink-docker/blob/master/README.md#development-workflow)
 again.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34419) flink-docker's .github/workflows/snapshot.yml doesn't support JDK 17 and 21

2024-02-09 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-34419:
--
Labels: starter  (was: )

> flink-docker's .github/workflows/snapshot.yml doesn't support JDK 17 and 21
> ---
>
> Key: FLINK-34419
> URL: https://issues.apache.org/jira/browse/FLINK-34419
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Build System / CI
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: starter
>
> [.github/workflows/snapshot.yml|https://github.com/apache/flink-docker/blob/master/.github/workflows/snapshot.yml#L40]
>  needs to be updated: JDK 17 support was added in 1.18 (FLINK-15736). JDK 21 
> support was added in 1.19 (FLINK-33163)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33163) Support Java 21 (LTS)

2024-02-09 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-33163:
--
Fix Version/s: 1.19.0

> Support Java 21 (LTS)
> -
>
> Key: FLINK-33163
> URL: https://issues.apache.org/jira/browse/FLINK-33163
> Project: Flink
>  Issue Type: New Feature
>Reporter: Maciej Bryński
>Priority: Major
> Fix For: 1.19.0
>
>
> Based on https://issues.apache.org/jira/browse/FLINK-15736 we should have a 
> similar ticket for Java 21 LTS.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34419) flink-docker's .github/workflows/snapshot.yml doesn't support JDK 17 and 21

2024-02-09 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-34419:
-

 Summary: flink-docker's .github/workflows/snapshot.yml doesn't 
support JDK 17 and 21
 Key: FLINK-34419
 URL: https://issues.apache.org/jira/browse/FLINK-34419
 Project: Flink
  Issue Type: Technical Debt
  Components: Build System / CI
Reporter: Matthias Pohl


[.github/workflows/snapshot.yml|https://github.com/apache/flink-docker/blob/master/.github/workflows/snapshot.yml#L40]
 needs to be updated: JDK 17 support was added in 1.18 (FLINK-15736). JDK 21 
support was added in 1.19 (FLINK-33163)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34152] Tune heap memory of autoscaled jobs [flink-kubernetes-operator]

2024-02-09 Thread via GitHub


mateczagany commented on code in PR #762:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/762#discussion_r1484178999


##
flink-autoscaler/src/main/java/org/apache/flink/autoscaler/JobAutoScalerImpl.java:
##
@@ -106,7 +107,13 @@ public void scale(Context ctx) throws Exception {
 } catch (Throwable e) {
 onError(ctx, autoscalerMetrics, e);
 } finally {
-applyParallelismOverrides(ctx);
+try {
+applyParallelismOverrides(ctx);
+applyConfigOverrides(ctx);

Review Comment:
   Sorry, it's not related to this change at all, and it's completely 
intermittent. I seem to be running into FLINK-32334 and the TaskManagers (thus 
the Kubernetes Deployment as well) are not properly deleted in native mode 
before starting the new deployment. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34282) Create a release branch

2024-02-09 Thread Jing Ge (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17816018#comment-17816018
 ] 

Jing Ge commented on FLINK-34282:
-

[~mapohl] please go ahead, thanks!

> Create a release branch
> ---
>
> Key: FLINK-34282
> URL: https://issues.apache.org/jira/browse/FLINK-34282
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: lincoln lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
> Attachments: screenshot-1.png
>
>
> If you are doing a new minor release, you need to update Flink version in the 
> following repositories and the [AzureCI project 
> configuration|https://dev.azure.com/apache-flink/apache-flink/]:
>  * [apache/flink|https://github.com/apache/flink]
>  * [apache/flink-docker|https://github.com/apache/flink-docker]
>  * [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks]
> Patch releases don't require these repositories to be touched. Simply 
> check out the already existing branch for that version:
> {code:java}
> $ git checkout release-$SHORT_RELEASE_VERSION
> {code}
> h4. Flink repository
> Create a branch for the new version that we want to release before updating 
> the master branch to the next development version:
> {code:bash}
> $ cd ./tools
> tools $ releasing/create_snapshot_branch.sh
> tools $ git checkout master
> tools $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION 
> NEW_VERSION=$NEXT_SNAPSHOT_VERSION releasing/update_branch_version.sh
> {code}
> In the {{master}} branch, add a new value (e.g. {{v1_16("1.16")}}) to 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  as the last entry:
> {code:java}
> // ...
> v1_12("1.12"),
> v1_13("1.13"),
> v1_14("1.14"),
> v1_15("1.15"),
> v1_16("1.16");
> {code}
> Additionally in master, update the branch list of the GitHub Actions nightly 
> workflow (see 
> [apache/flink:.github/workflows/nightly-trigger.yml#L31ff|https://github.com/apache/flink/blob/master/.github/workflows/nightly-trigger.yml#L31]):
>  The two most-recent releases and master should be covered.
> The newly created branch and updated {{master}} branch need to be pushed to 
> the official repository.
> h4. Flink Docker Repository
> Afterwards fork off from {{dev-master}} a {{dev-x.y}} branch in the 
> [apache/flink-docker|https://github.com/apache/flink-docker] repository. Make 
> sure that 
> [apache/flink-docker:.github/workflows/ci.yml|https://github.com/apache/flink-docker/blob/dev-master/.github/workflows/ci.yml]
>  points to the correct snapshot version; for {{dev-x.y}} it should point to 
> {{x.y-SNAPSHOT}}, while for {{dev-master}} it should point to the most 
> recent snapshot version ({{$NEXT_SNAPSHOT_VERSION}}).
> After pushing the new minor release branch, as the last step you should also 
> update the documentation workflow to also build the documentation for the new 
> release branch. Check [Managing 
> Documentation|https://cwiki.apache.org/confluence/display/FLINK/Managing+Documentation]
>  for details on how to do that. You may also want to manually trigger a build 
> to make the changes visible as soon as possible.
> h4. Flink Benchmark Repository
> First of all, check out the {{master}} branch as a new {{dev-x.y}} branch in 
> [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks], so that 
> we can have a branch named {{dev-x.y}} which could be built on top of 
> (${{CURRENT_SNAPSHOT_VERSION}}).
> Then, inside the repository you need to manually update the {{flink.version}} 
> property inside the parent *pom.xml* file. It should be pointing to the most 
> recent snapshot version ($NEXT_SNAPSHOT_VERSION). For example:
> {code:xml}
> 1.18-SNAPSHOT
> {code}
> h4. AzureCI Project Configuration
> The new release branch needs to be configured within AzureCI to make azure 
> aware of the new release branch. This matter can only be handled by Ververica 
> employees since they own the AzureCI setup.
>  
> 
> h3. Expectations (Minor Version only if not stated otherwise)
>  * Release branch has been created and pushed
>  * Changes on the new release branch are picked up by [Azure 
> CI|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary]
>  * {{master}} branch has the version information updated to the new version 
> (check pom.xml files and 
>  * 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  enum)
>  *  
> [apache/flink:.github/workflows/nightly-trigger.yml#L31ff|https://github.com/apache/

Re: [PR] feat: remove scala dependency [flink-connector-cassandra]

2024-02-09 Thread via GitHub


boring-cyborg[bot] commented on PR #27:
URL: 
https://github.com/apache/flink-connector-cassandra/pull/27#issuecomment-1935719123

   Thanks for opening this pull request! Please check out our contributing 
guidelines. (https://flink.apache.org/contributing/how-to-contribute.html)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34282) Create a release branch

2024-02-09 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17816016#comment-17816016
 ] 

Matthias Pohl commented on FLINK-34282:
---

Correct, I will go ahead and do that then.

> Create a release branch
> ---
>
> Key: FLINK-34282
> URL: https://issues.apache.org/jira/browse/FLINK-34282
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: lincoln lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
> Attachments: screenshot-1.png
>
>
> If you are doing a new minor release, you need to update Flink version in the 
> following repositories and the [AzureCI project 
> configuration|https://dev.azure.com/apache-flink/apache-flink/]:
>  * [apache/flink|https://github.com/apache/flink]
>  * [apache/flink-docker|https://github.com/apache/flink-docker]
>  * [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks]
> Patch releases don't require these repositories to be touched. Simply 
> check out the already existing branch for that version:
> {code:java}
> $ git checkout release-$SHORT_RELEASE_VERSION
> {code}
> h4. Flink repository
> Create a branch for the new version that we want to release before updating 
> the master branch to the next development version:
> {code:bash}
> $ cd ./tools
> tools $ releasing/create_snapshot_branch.sh
> tools $ git checkout master
> tools $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION 
> NEW_VERSION=$NEXT_SNAPSHOT_VERSION releasing/update_branch_version.sh
> {code}
> In the {{master}} branch, add a new value (e.g. {{v1_16("1.16")}}) to 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  as the last entry:
> {code:java}
> // ...
> v1_12("1.12"),
> v1_13("1.13"),
> v1_14("1.14"),
> v1_15("1.15"),
> v1_16("1.16");
> {code}
> Additionally in master, update the branch list of the GitHub Actions nightly 
> workflow (see 
> [apache/flink:.github/workflows/nightly-trigger.yml#L31ff|https://github.com/apache/flink/blob/master/.github/workflows/nightly-trigger.yml#L31]):
>  The two most-recent releases and master should be covered.
> The newly created branch and updated {{master}} branch need to be pushed to 
> the official repository.
> h4. Flink Docker Repository
> Afterwards fork off from {{dev-master}} a {{dev-x.y}} branch in the 
> [apache/flink-docker|https://github.com/apache/flink-docker] repository. Make 
> sure that 
> [apache/flink-docker:.github/workflows/ci.yml|https://github.com/apache/flink-docker/blob/dev-master/.github/workflows/ci.yml]
>  points to the correct snapshot version; for {{dev-x.y}} it should point to 
> {{x.y-SNAPSHOT}}, while for {{dev-master}} it should point to the most 
> recent snapshot version ({{$NEXT_SNAPSHOT_VERSION}}).
> After pushing the new minor release branch, as the last step you should also 
> update the documentation workflow to also build the documentation for the new 
> release branch. Check [Managing 
> Documentation|https://cwiki.apache.org/confluence/display/FLINK/Managing+Documentation]
>  for details on how to do that. You may also want to manually trigger a build 
> to make the changes visible as soon as possible.
> h4. Flink Benchmark Repository
> First of all, check out the {{master}} branch as a new {{dev-x.y}} branch in 
> [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks], so that 
> we can have a branch named {{dev-x.y}} which could be built on top of 
> (${{CURRENT_SNAPSHOT_VERSION}}).
> Then, inside the repository you need to manually update the {{flink.version}} 
> property inside the parent *pom.xml* file. It should be pointing to the most 
> recent snapshot version ($NEXT_SNAPSHOT_VERSION). For example:
> {code:xml}
> 1.18-SNAPSHOT
> {code}
> h4. AzureCI Project Configuration
> The new release branch needs to be configured within AzureCI to make azure 
> aware of the new release branch. This matter can only be handled by Ververica 
> employees since they own the AzureCI setup.
>  
> 
> h3. Expectations (Minor Version only if not stated otherwise)
>  * Release branch has been created and pushed
>  * Changes on the new release branch are picked up by [Azure 
> CI|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary]
>  * {{master}} branch has the version information updated to the new version 
> (check pom.xml files and 
>  * 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  enum)
>  *  
> [apache/flink:.github/workflows/nightly-trigger.yml#L31ff|https:

Re: [PR] feat: remove scala dependency [flink-connector-cassandra]

2024-02-09 Thread via GitHub


yuranye closed pull request #27: feat: remove scala dependency
URL: https://github.com/apache/flink-connector-cassandra/pull/27


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34282) Create a release branch

2024-02-09 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17816015#comment-17816015
 ] 

Martijn Visser commented on FLINK-34282:


[~mapohl] You mean force-pushing `dev-1.19` based on `dev-master` on 
flink/docker? I don't think that's an issue at all. 

> Create a release branch
> ---
>
> Key: FLINK-34282
> URL: https://issues.apache.org/jira/browse/FLINK-34282
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: lincoln lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
> Attachments: screenshot-1.png
>
>
> If you are doing a new minor release, you need to update Flink version in the 
> following repositories and the [AzureCI project 
> configuration|https://dev.azure.com/apache-flink/apache-flink/]:
>  * [apache/flink|https://github.com/apache/flink]
>  * [apache/flink-docker|https://github.com/apache/flink-docker]
>  * [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks]
> Patch releases don't require these repositories to be touched. Simply 
> check out the already existing branch for that version:
> {code:java}
> $ git checkout release-$SHORT_RELEASE_VERSION
> {code}
> h4. Flink repository
> Create a branch for the new version that we want to release before updating 
> the master branch to the next development version:
> {code:bash}
> $ cd ./tools
> tools $ releasing/create_snapshot_branch.sh
> tools $ git checkout master
> tools $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION 
> NEW_VERSION=$NEXT_SNAPSHOT_VERSION releasing/update_branch_version.sh
> {code}
> In the {{master}} branch, add a new value (e.g. {{v1_16("1.16")}}) to 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  as the last entry:
> {code:java}
> // ...
> v1_12("1.12"),
> v1_13("1.13"),
> v1_14("1.14"),
> v1_15("1.15"),
> v1_16("1.16");
> {code}
> Additionally in master, update the branch list of the GitHub Actions nightly 
> workflow (see 
> [apache/flink:.github/workflows/nightly-trigger.yml#L31ff|https://github.com/apache/flink/blob/master/.github/workflows/nightly-trigger.yml#L31]):
>  The two most-recent releases and master should be covered.
> The newly created branch and updated {{master}} branch need to be pushed to 
> the official repository.
> h4. Flink Docker Repository
> Afterwards fork off from {{dev-master}} a {{dev-x.y}} branch in the 
> [apache/flink-docker|https://github.com/apache/flink-docker] repository. Make 
> sure that 
> [apache/flink-docker:.github/workflows/ci.yml|https://github.com/apache/flink-docker/blob/dev-master/.github/workflows/ci.yml]
>  points to the correct snapshot version; for {{dev-x.y}} it should point to 
> {{x.y-SNAPSHOT}}, while for {{dev-master}} it should point to the most 
> recent snapshot version ({{$NEXT_SNAPSHOT_VERSION}}).
> After pushing the new minor release branch, as the last step you should also 
> update the documentation workflow to also build the documentation for the new 
> release branch. Check [Managing 
> Documentation|https://cwiki.apache.org/confluence/display/FLINK/Managing+Documentation]
>  for details on how to do that. You may also want to manually trigger a build 
> to make the changes visible as soon as possible.
> h4. Flink Benchmark Repository
> First of all, check out the {{master}} branch as a new {{dev-x.y}} branch in 
> [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks], so that 
> we can have a branch named {{dev-x.y}} which could be built on top of 
> (${{CURRENT_SNAPSHOT_VERSION}}).
> Then, inside the repository you need to manually update the {{flink.version}} 
> property inside the parent *pom.xml* file. It should be pointing to the most 
> recent snapshot version ($NEXT_SNAPSHOT_VERSION). For example:
> {code:xml}
> 1.18-SNAPSHOT
> {code}
> h4. AzureCI Project Configuration
> The new release branch needs to be configured within AzureCI to make azure 
> aware of the new release branch. This matter can only be handled by Ververica 
> employees since they own the AzureCI setup.
>  
> 
> h3. Expectations (Minor Version only if not stated otherwise)
>  * Release branch has been created and pushed
>  * Changes on the new release branch are picked up by [Azure 
> CI|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary]
>  * {{master}} branch has the version information updated to the new version 
> (check pom.xml files and 
>  * 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]

[jira] [Commented] (FLINK-34403) VeryBigPbProtoToRowTest#testSimple cannot pass due to OOM

2024-02-09 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17816011#comment-17816011
 ] 

Matthias Pohl commented on FLINK-34403:
---

[~dsaisharath] what's a heap size that would allow the test to pass?
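
One way to experiment locally is to re-run the test with an explicit heap limit; the module path and the Surefire {{argLine}} property below are assumptions (Flink's build may wire JVM flags differently):

{code:bash}
# Re-run the failing test with a larger heap for the forked Surefire JVM.
# -DargLine sets the JVM flags Surefire passes to test JVMs (default wiring).
cd flink-formats/flink-protobuf
mvn test -Dtest=VeryBigPbRowToProtoTest \
  -DargLine="-Xmx2g -XX:+HeapDumpOnOutOfMemoryError"
{code}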

> VeryBigPbProtoToRowTest#testSimple cannot pass due to OOM
> -
>
> Key: FLINK-34403
> URL: https://issues.apache.org/jira/browse/FLINK-34403
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.20.0
>Reporter: Benchao Li
>Priority: Critical
>  Labels: test-stability
>
> After FLINK-33611 merged, the misc test on GHA cannot pass due to out of 
> memory error, throwing following exceptions:
> {code:java}
> Error: 05:43:21 05:43:21.768 [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 40.98 s <<< FAILURE! -- in 
> org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest
> Error: 05:43:21 05:43:21.773 [ERROR] 
> org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest.testSimple -- Time 
> elapsed: 40.97 s <<< ERROR!
> Feb 07 05:43:21 org.apache.flink.util.FlinkRuntimeException: Error in 
> serialization.
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:327)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:162)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamGraph.getJobGraph(StreamGraph.java:1007)
> Feb 07 05:43:21   at 
> org.apache.flink.client.StreamGraphTranslator.translateToJobGraph(StreamGraphTranslator.java:56)
> Feb 07 05:43:21   at 
> org.apache.flink.client.FlinkPipelineTranslationUtil.getJobGraph(FlinkPipelineTranslationUtil.java:45)
> Feb 07 05:43:21   at 
> org.apache.flink.client.deployment.executors.PipelineExecutorUtils.getJobGraph(PipelineExecutorUtils.java:61)
> Feb 07 05:43:21   at 
> org.apache.flink.client.deployment.executors.LocalExecutor.getJobGraph(LocalExecutor.java:104)
> Feb 07 05:43:21   at 
> org.apache.flink.client.deployment.executors.LocalExecutor.execute(LocalExecutor.java:81)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:2440)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:2421)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.datastream.DataStream.executeAndCollectWithClient(DataStream.java:1495)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.datastream.DataStream.executeAndCollect(DataStream.java:1382)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.datastream.DataStream.executeAndCollect(DataStream.java:1367)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.validateRow(ProtobufTestHelper.java:66)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:89)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:76)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:71)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest.testSimple(VeryBigPbRowToProtoTest.java:37)
> Feb 07 05:43:21   at java.lang.reflect.Method.invoke(Method.java:498)
> Feb 07 05:43:21 Caused by: java.util.concurrent.ExecutionException: 
> java.lang.IllegalArgumentException: Self-suppression not permitted
> Feb 07 05:43:21   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> Feb 07 05:43:21   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:323)
> Feb 07 05:43:21   ... 18 more
> Feb 07 05:43:21 Caused by: java.lang.IllegalArgumentException: 
> Self-suppression not permitted
> Feb 07 05:43:21   at 
> java.lang.Throwable.addSuppressed(Throwable.java:1072)
> Feb 07 05:43:21   at 
> org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:556)
> Feb 07 05:43:21   at 
> org.apache.flink.util.InstantiationUtil.writeObjectToConfig(InstantiationUtil.java:486)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamConfig.lambda$triggerSerializationAndReturnFuture$0(StreamConfig.java:182)
> Feb 07 05:43:21   at 
> java.util.concurrent.CompletableFuture.uniAcce

Re: [PR] [FLINK-34282][docker] Merge commit 44f05828 (pre-FLINK-34282 dev-master) into dev-1.19 [flink-docker]

2024-02-09 Thread via GitHub


XComp commented on PR #179:
URL: https://github.com/apache/flink-docker/pull/179#issuecomment-1935691236

   hm, handling the merge commit through GitHub doesn't seem to be working as 
expected. I'm going to close this PR and move the discussion into 
[FLINK-34282|https://issues.apache.org/jira/browse/FLINK-34282?focusedCommentId=17816007&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17816007]


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34282) Create a release branch

2024-02-09 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17816007#comment-17816007
 ] 

Matthias Pohl commented on FLINK-34282:
---

The {{dev-1.19}} branch in 
[apache/flink-docker|https://github.com/apache/flink-docker/tree/dev-1.19] is 
based on {{master}} rather than {{dev-master}}. This causes issues like 
FLINK-34411.

[~lincoln.86xy] [~martijnvisser] [~jingge] [~yunta] Are there any objections 
against force-pushing a rebase? ...since the branch was only created 2 days ago.
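
A minimal sketch of what that force-push could look like (assuming {{dev-1.19}} carries no unique commits yet; illustrative only, not what was actually run):

{code:bash}
# Rebuild dev-1.19 from dev-master and force-push it.
# WARNING: this rewrites the remote branch's history.
git fetch origin
git checkout -B dev-1.19 origin/dev-master
git push --force-with-lease origin dev-1.19
{code}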

> Create a release branch
> ---
>
> Key: FLINK-34282
> URL: https://issues.apache.org/jira/browse/FLINK-34282
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: lincoln lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
> Attachments: screenshot-1.png
>
>
> If you are doing a new minor release, you need to update Flink version in the 
> following repositories and the [AzureCI project 
> configuration|https://dev.azure.com/apache-flink/apache-flink/]:
>  * [apache/flink|https://github.com/apache/flink]
>  * [apache/flink-docker|https://github.com/apache/flink-docker]
>  * [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks]
> Patch releases don't require these repositories to be touched. Simply 
> check out the already existing branch for that version:
> {code:java}
> $ git checkout release-$SHORT_RELEASE_VERSION
> {code}
> h4. Flink repository
> Create a branch for the new version that we want to release before updating 
> the master branch to the next development version:
> {code:bash}
> $ cd ./tools
> tools $ releasing/create_snapshot_branch.sh
> tools $ git checkout master
> tools $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION 
> NEW_VERSION=$NEXT_SNAPSHOT_VERSION releasing/update_branch_version.sh
> {code}
> In the {{master}} branch, add a new value (e.g. {{v1_16("1.16")}}) to 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  as the last entry:
> {code:java}
> // ...
> v1_12("1.12"),
> v1_13("1.13"),
> v1_14("1.14"),
> v1_15("1.15"),
> v1_16("1.16");
> {code}
> Additionally in master, update the branch list of the GitHub Actions nightly 
> workflow (see 
> [apache/flink:.github/workflows/nightly-trigger.yml#L31ff|https://github.com/apache/flink/blob/master/.github/workflows/nightly-trigger.yml#L31]):
>  The two most-recent releases and master should be covered.
> The newly created branch and updated {{master}} branch need to be pushed to 
> the official repository.
> h4. Flink Docker Repository
> Afterwards fork off from {{dev-master}} a {{dev-x.y}} branch in the 
> [apache/flink-docker|https://github.com/apache/flink-docker] repository. Make 
> sure that 
> [apache/flink-docker:.github/workflows/ci.yml|https://github.com/apache/flink-docker/blob/dev-master/.github/workflows/ci.yml]
>  points to the correct snapshot version; for {{dev-x.y}} it should point to 
> {{x.y-SNAPSHOT}}, while for {{dev-master}} it should point to the most 
> recent snapshot version ({{$NEXT_SNAPSHOT_VERSION}}).
> After pushing the new minor release branch, as the last step you should also 
> update the documentation workflow to also build the documentation for the new 
> release branch. Check [Managing 
> Documentation|https://cwiki.apache.org/confluence/display/FLINK/Managing+Documentation]
>  for details on how to do that. You may also want to manually trigger a build 
> to make the changes visible as soon as possible.
> h4. Flink Benchmark Repository
> First of all, check out the {{master}} branch as a new {{dev-x.y}} branch in 
> [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks], so that 
> we can have a branch named {{dev-x.y}} which could be built on top of 
> (${{CURRENT_SNAPSHOT_VERSION}}).
> Then, inside the repository you need to manually update the {{flink.version}} 
> property inside the parent *pom.xml* file. It should be pointing to the most 
> recent snapshot version ($NEXT_SNAPSHOT_VERSION). For example:
> {code:xml}
> 1.18-SNAPSHOT
> {code}
> h4. AzureCI Project Configuration
> The new release branch needs to be configured within AzureCI to make azure 
> aware of the new release branch. This matter can only be handled by Ververica 
> employees since they own the AzureCI setup.
>  
> 
> h3. Expectations (Minor Version only if not stated otherwise)
>  * Release branch has been created and pushed
>  * Changes on the new release branch are picked up by [Azure 
> CI|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary]
>  * {{master}} branch has the version information updated to the new version 
> 

[jira] [Closed] (FLINK-34267) Python connector test fails when running on MacBook with m1 processor

2024-02-09 Thread Gabor Somogyi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Somogyi closed FLINK-34267.
-

> Python connector test fails when running on MacBook with m1 processor
> -
>
> Key: FLINK-34267
> URL: https://issues.apache.org/jira/browse/FLINK-34267
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Build System / CI, Connectors / Common
> Environment: m1 MacBook Pro
> MacOS 14.2.1
>Reporter: Aleksandr Pilipenko
>Assignee: Aleksandr Pilipenko
>Priority: Major
>  Labels: pull-request-available
>
> Attempt to execute lint_python.sh on m1 macbook fails while trying to install 
> miniconda environment
> {code}
> =installing environment=
> installing wget...
> install wget... [SUCCESS]
> installing miniconda...
> download miniconda...
> download miniconda... [SUCCESS]
> installing conda...
> tail: illegal offset -- +018838: Invalid argument
> tail: illegal offset -- +018838: Invalid argument
> /Users/apilipenko/Dev/flink-connector-aws/flink-python/dev/download/miniconda.sh:
>  line 353: 
> /Users/apilipenko/Dev/flink-connector-aws/flink-python/dev/.conda/preconda.tar.bz2:
>  No such file or directory
> upgrade pip...
> ./dev/lint-python.sh: line 215: 
> /Users/apilipenko/Dev/flink-connector-aws/flink-python/dev/.conda/bin/python: 
> No such file or directory
> upgrade pip... [SUCCESS]
> install conda ... [SUCCESS]
> install miniconda... [SUCCESS]
> installing python environment...
> installing python3.7...
> ./dev/lint-python.sh: line 247: 
> /Users/apilipenko/Dev/flink-connector-aws/flink-python/dev/.conda/bin/conda: 
> No such file or directory
> conda install 3.7 retrying 1/3
> ./dev/lint-python.sh: line 254: 
> /Users/apilipenko/Dev/flink-connector-aws/flink-python/dev/.conda/bin/conda: 
> No such file or directory
> conda install 3.7 retrying 2/3
> ./dev/lint-python.sh: line 254: 
> /Users/apilipenko/Dev/flink-connector-aws/flink-python/dev/.conda/bin/conda: 
> No such file or directory
> conda install 3.7 retrying 3/3
> ./dev/lint-python.sh: line 254: 
> /Users/apilipenko/Dev/flink-connector-aws/flink-python/dev/.conda/bin/conda: 
> No such file or directory
> conda install 3.7 failed after retrying 3 times.You can retry to 
> execute the script again.
> {code}
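
The failure pattern suggests the script fetches an x86_64 Miniconda installer on arm64 Macs; a sketch of the kind of architecture check a fix needs (illustrative only, not the actual patch):

{code:bash}
# Pick a Miniconda installer matching the host OS and CPU architecture.
OS=$(uname -s)      # e.g. Darwin, Linux
ARCH=$(uname -m)    # e.g. x86_64, arm64
case "$OS-$ARCH" in
  Darwin-arm64)  INSTALLER="Miniconda3-latest-MacOSX-arm64.sh" ;;
  Darwin-x86_64) INSTALLER="Miniconda3-latest-MacOSX-x86_64.sh" ;;
  Linux-x86_64)  INSTALLER="Miniconda3-latest-Linux-x86_64.sh" ;;
  *) echo "unsupported platform: $OS-$ARCH" >&2; exit 1 ;;
esac
wget "https://repo.anaconda.com/miniconda/$INSTALLER" -O miniconda.sh
{code}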



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34267) Python connector test fails when running on MacBook with m1 processor

2024-02-09 Thread Gabor Somogyi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Somogyi reassigned FLINK-34267:
-

Assignee: Aleksandr Pilipenko

> Python connector test fails when running on MacBook with m1 processor
> -
>
> Key: FLINK-34267
> URL: https://issues.apache.org/jira/browse/FLINK-34267
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Build System / CI, Connectors / Common
> Environment: m1 MacBook Pro
> MacOS 14.2.1
>Reporter: Aleksandr Pilipenko
>Assignee: Aleksandr Pilipenko
>Priority: Major
>  Labels: pull-request-available
>
> Attempt to execute lint_python.sh on m1 macbook fails while trying to install 
> miniconda environment
> {code}
> =installing environment=
> installing wget...
> install wget... [SUCCESS]
> installing miniconda...
> download miniconda...
> download miniconda... [SUCCESS]
> installing conda...
> tail: illegal offset -- +018838: Invalid argument
> tail: illegal offset -- +018838: Invalid argument
> /Users/apilipenko/Dev/flink-connector-aws/flink-python/dev/download/miniconda.sh:
>  line 353: 
> /Users/apilipenko/Dev/flink-connector-aws/flink-python/dev/.conda/preconda.tar.bz2:
>  No such file or directory
> upgrade pip...
> ./dev/lint-python.sh: line 215: 
> /Users/apilipenko/Dev/flink-connector-aws/flink-python/dev/.conda/bin/python: 
> No such file or directory
> upgrade pip... [SUCCESS]
> install conda ... [SUCCESS]
> install miniconda... [SUCCESS]
> installing python environment...
> installing python3.7...
> ./dev/lint-python.sh: line 247: 
> /Users/apilipenko/Dev/flink-connector-aws/flink-python/dev/.conda/bin/conda: 
> No such file or directory
> conda install 3.7 retrying 1/3
> ./dev/lint-python.sh: line 254: 
> /Users/apilipenko/Dev/flink-connector-aws/flink-python/dev/.conda/bin/conda: 
> No such file or directory
> conda install 3.7 retrying 2/3
> ./dev/lint-python.sh: line 254: 
> /Users/apilipenko/Dev/flink-connector-aws/flink-python/dev/.conda/bin/conda: 
> No such file or directory
> conda install 3.7 retrying 3/3
> ./dev/lint-python.sh: line 254: 
> /Users/apilipenko/Dev/flink-connector-aws/flink-python/dev/.conda/bin/conda: 
> No such file or directory
> conda install 3.7 failed after retrying 3 times.You can retry to 
> execute the script again.
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34267][CI] Update miniconda install script to fix build on MacOS [flink-connector-shared-utils]

2024-02-09 Thread via GitHub


gaborgsomogyi commented on PR #34:
URL: 
https://github.com/apache/flink-connector-shared-utils/pull/34#issuecomment-1935638003

   Thanks for the fix!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-34267) Python connector test fails when running on MacBook with m1 processor

2024-02-09 Thread Gabor Somogyi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Somogyi resolved FLINK-34267.
---
Resolution: Fixed

[{{e6e1426}}|https://github.com/apache/flink-connector-shared-utils/commit/e6e14268b8316352031b25f4b67ed64dc142b683]
 on ci_utils

> Python connector test fails when running on MacBook with m1 processor
> -
>
> Key: FLINK-34267
> URL: https://issues.apache.org/jira/browse/FLINK-34267
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Build System / CI, Connectors / Common
> Environment: m1 MacBook Pro
> MacOS 14.2.1
>Reporter: Aleksandr Pilipenko
>Assignee: Aleksandr Pilipenko
>Priority: Major
>  Labels: pull-request-available
>
> Attempt to execute lint_python.sh on m1 macbook fails while trying to install 
> miniconda environment
> {code}
> =installing environment=
> installing wget...
> install wget... [SUCCESS]
> installing miniconda...
> download miniconda...
> download miniconda... [SUCCESS]
> installing conda...
> tail: illegal offset -- +018838: Invalid argument
> tail: illegal offset -- +018838: Invalid argument
> /Users/apilipenko/Dev/flink-connector-aws/flink-python/dev/download/miniconda.sh:
>  line 353: 
> /Users/apilipenko/Dev/flink-connector-aws/flink-python/dev/.conda/preconda.tar.bz2:
>  No such file or directory
> upgrade pip...
> ./dev/lint-python.sh: line 215: 
> /Users/apilipenko/Dev/flink-connector-aws/flink-python/dev/.conda/bin/python: 
> No such file or directory
> upgrade pip... [SUCCESS]
> install conda ... [SUCCESS]
> install miniconda... [SUCCESS]
> installing python environment...
> installing python3.7...
> ./dev/lint-python.sh: line 247: 
> /Users/apilipenko/Dev/flink-connector-aws/flink-python/dev/.conda/bin/conda: 
> No such file or directory
> conda install 3.7 retrying 1/3
> ./dev/lint-python.sh: line 254: 
> /Users/apilipenko/Dev/flink-connector-aws/flink-python/dev/.conda/bin/conda: 
> No such file or directory
> conda install 3.7 retrying 2/3
> ./dev/lint-python.sh: line 254: 
> /Users/apilipenko/Dev/flink-connector-aws/flink-python/dev/.conda/bin/conda: 
> No such file or directory
> conda install 3.7 retrying 3/3
> ./dev/lint-python.sh: line 254: 
> /Users/apilipenko/Dev/flink-connector-aws/flink-python/dev/.conda/bin/conda: 
> No such file or directory
> conda install 3.7 failed after retrying 3 times.You can retry to 
> execute the script again.
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34267][CI] Update miniconda install script to fix build on MacOS [flink-connector-shared-utils]

2024-02-09 Thread via GitHub


gaborgsomogyi merged PR #34:
URL: https://github.com/apache/flink-connector-shared-utils/pull/34


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34267][CI] Update miniconda install script to fix build on MacOS [flink-connector-shared-utils]

2024-02-09 Thread via GitHub


gaborgsomogyi commented on PR #34:
URL: 
https://github.com/apache/flink-connector-shared-utils/pull/34#issuecomment-1935633037

   @pvary Thanks! Now I remember that the newest conda version will work once 
the connector bumps its Flink version. Merging...


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34411) "Wordcount on Docker test (custom fs plugin)" timed out with some strange issue while setting the test up

2024-02-09 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-34411:
--
Affects Version/s: (was: 1.20.0)

> "Wordcount on Docker test (custom fs plugin)" timed out with some strange 
> issue while setting the test up
> -
>
> Key: FLINK-34411
> URL: https://issues.apache.org/jira/browse/FLINK-34411
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57380&view=logs&j=bea52777-eaf8-5663-8482-18fbc3630e81&t=43ba8ce7-ebbf-57cd-9163-444305d74117&l=5802
> {code}
> Feb 07 15:22:39 
> ==
> Feb 07 15:22:39 Running 'Wordcount on Docker test (custom fs plugin)'
> Feb 07 15:22:39 
> ==
> Feb 07 15:22:39 TEST_DATA_DIR: 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-39516987853
> Feb 07 15:22:40 Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT
> Feb 07 15:22:40 Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT
> Feb 07 15:22:41 Docker version 24.0.7, build afdd53b
> Feb 07 15:22:44 docker-compose version 1.29.2, build 5becea4c
> Feb 07 15:22:44 Starting fileserver for Flink distribution
> Feb 07 15:22:44 ~/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin 
> ~/work/1/s
> Feb 07 15:23:07 ~/work/1/s
> Feb 07 15:23:07 
> ~/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-39516987853
>  ~/work/1/s
> Feb 07 15:23:07 Preparing Dockeriles
> Feb 07 15:23:07 Executing command: git clone 
> https://github.com/apache/flink-docker.git --branch dev-1.19 --single-branch
> Cloning into 'flink-docker'...
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common_docker.sh: 
> line 65: ./add-custom.sh: No such file or directory
> Feb 07 15:23:07 Building images
> ERROR: unable to prepare context: path "dev/test_docker_embedded_job-ubuntu" 
> not found
> Feb 07 15:23:09 ~/work/1/s
> Feb 07 15:23:09 Command: build_image test_docker_embedded_job failed. 
> Retrying...
> Feb 07 15:23:14 Starting fileserver for Flink distribution
> Feb 07 15:23:14 ~/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin 
> ~/work/1/s
> Feb 07 15:23:36 ~/work/1/s
> Feb 07 15:23:36 
> ~/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-39516987853
>  ~/work/1/s
> Feb 07 15:23:36 Preparing Dockeriles
> Feb 07 15:23:36 Executing command: git clone 
> https://github.com/apache/flink-docker.git --branch dev-1.19 --single-branch
> fatal: destination path 'flink-docker' already exists and is not an empty 
> directory.
> Feb 07 15:23:36 Retry 1/5 exited 128, retrying in 1 seconds...
> Traceback (most recent call last):
>   File 
> "/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/python3_fileserver.py",
>  line 26, in 
> httpd = socketserver.TCPServer(("", ), handler)
>   File "/usr/lib/python3.8/socketserver.py", line 452, in __init__
> self.server_bind()
>   File "/usr/lib/python3.8/socketserver.py", line 466, in server_bind
> self.socket.bind(self.server_address)
> OSError: [Errno 98] Address already in use
> [...]
> {code}
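
The trailing {{OSError: [Errno 98] Address already in use}} indicates the retry starts a second fileserver while the first is still bound to the port. A hedged illustration of a guard the scripts could add before retrying (the port variable is hypothetical; the real scripts choose their own):

{code:bash}
# Kill any leftover fileserver still bound to the port before retrying.
PORT=9999  # hypothetical; the real scripts pick their own port
LEFTOVER=$(lsof -t -i tcp:"$PORT" || true)
if [ -n "$LEFTOVER" ]; then
  kill $LEFTOVER || true
fi
python3 python3_fileserver.py &
{code}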



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34273) git fetch fails

2024-02-09 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815976#comment-17815976
 ] 

Matthias Pohl commented on FLINK-34273:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57417&view=logs&j=f2c100be-250b-5e85-7bbe-176f68fcddc5&t=6d51823d-b341-5f58-cf42-40e574735727&l=359

> git fetch fails
> ---
>
> Key: FLINK-34273
> URL: https://issues.apache.org/jira/browse/FLINK-34273
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI, Test Infrastructure
>Affects Versions: 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> We've seen multiple {{git fetch}} failures. I assume this to be an 
> infrastructure issue. This Jira issue is for documentation purposes.
> {code:java}
> error: RPC failed; curl 18 transfer closed with outstanding read data 
> remaining
> error: 5211 bytes of body are still expected
> fetch-pack: unexpected disconnect while reading sideband packet
> fatal: early EOF
> fatal: fetch-pack: invalid index-pack output {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57080&view=logs&j=0e7be18f-84f2-53f0-a32d-4a5e4a174679&t=5d6dc3d3-393d-5111-3a40-c6a5a36202e6&l=667
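
Since this looks like flaky networking rather than a code problem, the usual mitigation is a retry wrapper around the fetch; a minimal sketch (an assumption, not what CI currently does):

{code:bash}
# Retry a flaky git fetch a few times with an increasing backoff.
for i in 1 2 3; do
  git fetch origin && break
  echo "git fetch failed (attempt $i), retrying..." >&2
  sleep $((i * 10))
done
{code}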



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-22040) Maven: Entry has not been leased from this pool / fix for e2e tests

2024-02-09 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815975#comment-17815975
 ] 

Matthias Pohl commented on FLINK-22040:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57417&view=logs&j=b53e1644-5cb4-5a3b-5d48-f523f39bcf06&t=b68c9f5c-04c9-5c75-3862-a3a27aabbce3&l=2105

> Maven: Entry has not been leased from this pool / fix for e2e tests
> ---
>
> Key: FLINK-22040
> URL: https://issues.apache.org/jira/browse/FLINK-22040
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System / Azure Pipelines
>Affects Versions: 1.12.2, 1.13.0, 1.15.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: auto-unassigned
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34403) VeryBigPbProtoToRowTest#testSimple cannot pass due to OOM

2024-02-09 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815969#comment-17815969
 ] 

Matthias Pohl edited comment on FLINK-34403 at 2/9/24 8:55 AM:
---

* 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57406&view=logs&j=fc5181b0-e452-5c8f-68de-1097947f6483&t=995c650b-6573-581c-9ce6-7ad4cc038461&l=23861]
 * 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57407&view=logs&j=fc5181b0-e452-5c8f-68de-1097947f6483&t=995c650b-6573-581c-9ce6-7ad4cc038461&l=23864]
 * 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57417&view=logs&j=d871f0ce-7328-5d00-023b-e7391f5801c8&t=77cbea27-feb9-5cf5-53f7-3267f9f9c6b6&l=23438]
 * 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57417&view=logs&j=5cae8624-c7eb-5c51-92d3-4d2dacedd221&t=5acec1b4-945b-59ca-34f8-168928ce5199&l=23492


was (Author: mapohl):
* 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57406&view=logs&j=fc5181b0-e452-5c8f-68de-1097947f6483&t=995c650b-6573-581c-9ce6-7ad4cc038461&l=23861]
 * 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57407&view=logs&j=fc5181b0-e452-5c8f-68de-1097947f6483&t=995c650b-6573-581c-9ce6-7ad4cc038461&l=23864]

> VeryBigPbProtoToRowTest#testSimple cannot pass due to OOM
> -
>
> Key: FLINK-34403
> URL: https://issues.apache.org/jira/browse/FLINK-34403
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.20.0
>Reporter: Benchao Li
>Priority: Critical
>  Labels: test-stability
>
> After FLINK-33611 merged, the misc test on GHA cannot pass due to out of 
> memory error, throwing following exceptions:
> {code:java}
> Error: 05:43:21 05:43:21.768 [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 40.98 s <<< FAILURE! -- in 
> org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest
> Error: 05:43:21 05:43:21.773 [ERROR] 
> org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest.testSimple -- Time 
> elapsed: 40.97 s <<< ERROR!
> Feb 07 05:43:21 org.apache.flink.util.FlinkRuntimeException: Error in 
> serialization.
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:327)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:162)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamGraph.getJobGraph(StreamGraph.java:1007)
> Feb 07 05:43:21   at 
> org.apache.flink.client.StreamGraphTranslator.translateToJobGraph(StreamGraphTranslator.java:56)
> Feb 07 05:43:21   at 
> org.apache.flink.client.FlinkPipelineTranslationUtil.getJobGraph(FlinkPipelineTranslationUtil.java:45)
> Feb 07 05:43:21   at 
> org.apache.flink.client.deployment.executors.PipelineExecutorUtils.getJobGraph(PipelineExecutorUtils.java:61)
> Feb 07 05:43:21   at 
> org.apache.flink.client.deployment.executors.LocalExecutor.getJobGraph(LocalExecutor.java:104)
> Feb 07 05:43:21   at 
> org.apache.flink.client.deployment.executors.LocalExecutor.execute(LocalExecutor.java:81)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:2440)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:2421)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.datastream.DataStream.executeAndCollectWithClient(DataStream.java:1495)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.datastream.DataStream.executeAndCollect(DataStream.java:1382)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.datastream.DataStream.executeAndCollect(DataStream.java:1367)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.validateRow(ProtobufTestHelper.java:66)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:89)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:76)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:71)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest.testSimple(VeryBigPbRowToProtoTest.java:37)
> Feb 07 05:43:21   at java.lang.reflect.Method.invoke(Method.java:498)
> Feb 07 05:43:21 Caused by: java.util.concurrent.ExecutionException: 
> java.lang.IllegalArgumentE

[jira] [Commented] (FLINK-34411) "Wordcount on Docker test (custom fs plugin)" timed out with some strange issue while setting the test up

2024-02-09 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815973#comment-17815973
 ] 

Matthias Pohl commented on FLINK-34411:
---

Thanks for looking into it, [~ferenc-csaky]. That sounds like this is the 
issue, yeah.

> "Wordcount on Docker test (custom fs plugin)" timed out with some strange 
> issue while setting the test up
> -
>
> Key: FLINK-34411
> URL: https://issues.apache.org/jira/browse/FLINK-34411
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Affects Versions: 1.19.0, 1.20.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57380&view=logs&j=bea52777-eaf8-5663-8482-18fbc3630e81&t=43ba8ce7-ebbf-57cd-9163-444305d74117&l=5802
> {code}
> Feb 07 15:22:39 
> ==
> Feb 07 15:22:39 Running 'Wordcount on Docker test (custom fs plugin)'
> Feb 07 15:22:39 
> ==
> Feb 07 15:22:39 TEST_DATA_DIR: 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-39516987853
> Feb 07 15:22:40 Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT
> Feb 07 15:22:40 Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT
> Feb 07 15:22:41 Docker version 24.0.7, build afdd53b
> Feb 07 15:22:44 docker-compose version 1.29.2, build 5becea4c
> Feb 07 15:22:44 Starting fileserver for Flink distribution
> Feb 07 15:22:44 ~/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin 
> ~/work/1/s
> Feb 07 15:23:07 ~/work/1/s
> Feb 07 15:23:07 
> ~/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-39516987853
>  ~/work/1/s
> Feb 07 15:23:07 Preparing Dockeriles
> Feb 07 15:23:07 Executing command: git clone 
> https://github.com/apache/flink-docker.git --branch dev-1.19 --single-branch
> Cloning into 'flink-docker'...
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common_docker.sh: 
> line 65: ./add-custom.sh: No such file or directory
> Feb 07 15:23:07 Building images
> ERROR: unable to prepare context: path "dev/test_docker_embedded_job-ubuntu" 
> not found
> Feb 07 15:23:09 ~/work/1/s
> Feb 07 15:23:09 Command: build_image test_docker_embedded_job failed. 
> Retrying...
> Feb 07 15:23:14 Starting fileserver for Flink distribution
> Feb 07 15:23:14 ~/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin 
> ~/work/1/s
> Feb 07 15:23:36 ~/work/1/s
> Feb 07 15:23:36 
> ~/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-39516987853
>  ~/work/1/s
> Feb 07 15:23:36 Preparing Dockeriles
> Feb 07 15:23:36 Executing command: git clone 
> https://github.com/apache/flink-docker.git --branch dev-1.19 --single-branch
> fatal: destination path 'flink-docker' already exists and is not an empty 
> directory.
> Feb 07 15:23:36 Retry 1/5 exited 128, retrying in 1 seconds...
> Traceback (most recent call last):
>   File 
> "/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/python3_fileserver.py",
>  line 26, in 
> httpd = socketserver.TCPServer(("", ), handler)
>   File "/usr/lib/python3.8/socketserver.py", line 452, in __init__
> self.server_bind()
>   File "/usr/lib/python3.8/socketserver.py", line 466, in server_bind
> self.socket.bind(self.server_address)
> OSError: [Errno 98] Address already in use
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34403) VeryBigPbProtoToRowTest#testSimple cannot pass due to OOM

2024-02-09 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815969#comment-17815969
 ] 

Matthias Pohl edited comment on FLINK-34403 at 2/9/24 8:52 AM:
---

* 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57406&view=logs&j=fc5181b0-e452-5c8f-68de-1097947f6483&t=995c650b-6573-581c-9ce6-7ad4cc038461&l=23861]
 * 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57407&view=logs&j=fc5181b0-e452-5c8f-68de-1097947f6483&t=995c650b-6573-581c-9ce6-7ad4cc038461&l=23864]


was (Author: mapohl):
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57406&view=logs&j=fc5181b0-e452-5c8f-68de-1097947f6483&t=995c650b-6573-581c-9ce6-7ad4cc038461&l=23861

> VeryBigPbProtoToRowTest#testSimple cannot pass due to OOM
> -
>
> Key: FLINK-34403
> URL: https://issues.apache.org/jira/browse/FLINK-34403
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.20.0
>Reporter: Benchao Li
>Priority: Critical
>  Labels: test-stability
>
> After FLINK-33611 merged, the misc test on GHA cannot pass due to out of 
> memory error, throwing following exceptions:
> {code:java}
> Error: 05:43:21 05:43:21.768 [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 40.98 s <<< FAILURE! -- in 
> org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest
> Error: 05:43:21 05:43:21.773 [ERROR] 
> org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest.testSimple -- Time 
> elapsed: 40.97 s <<< ERROR!
> Feb 07 05:43:21 org.apache.flink.util.FlinkRuntimeException: Error in 
> serialization.
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:327)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:162)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamGraph.getJobGraph(StreamGraph.java:1007)
> Feb 07 05:43:21   at 
> org.apache.flink.client.StreamGraphTranslator.translateToJobGraph(StreamGraphTranslator.java:56)
> Feb 07 05:43:21   at 
> org.apache.flink.client.FlinkPipelineTranslationUtil.getJobGraph(FlinkPipelineTranslationUtil.java:45)
> Feb 07 05:43:21   at 
> org.apache.flink.client.deployment.executors.PipelineExecutorUtils.getJobGraph(PipelineExecutorUtils.java:61)
> Feb 07 05:43:21   at 
> org.apache.flink.client.deployment.executors.LocalExecutor.getJobGraph(LocalExecutor.java:104)
> Feb 07 05:43:21   at 
> org.apache.flink.client.deployment.executors.LocalExecutor.execute(LocalExecutor.java:81)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:2440)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:2421)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.datastream.DataStream.executeAndCollectWithClient(DataStream.java:1495)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.datastream.DataStream.executeAndCollect(DataStream.java:1382)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.datastream.DataStream.executeAndCollect(DataStream.java:1367)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.validateRow(ProtobufTestHelper.java:66)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:89)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:76)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:71)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest.testSimple(VeryBigPbRowToProtoTest.java:37)
> Feb 07 05:43:21   at java.lang.reflect.Method.invoke(Method.java:498)
> Feb 07 05:43:21 Caused by: java.util.concurrent.ExecutionException: 
> java.lang.IllegalArgumentException: Self-suppression not permitted
> Feb 07 05:43:21   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> Feb 07 05:43:21   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:323)
> Feb 07 05:43:21   ... 18 more
> Feb 07 05:43:21 Caused by: java.lang.IllegalArgumentException: 
> Self-suppression not permitted
> Feb 07 

[jira] [Commented] (FLINK-34411) "Wordcount on Docker test (custom fs plugin)" timed out with some strange issue while setting the test up

2024-02-09 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815971#comment-17815971
 ] 

Matthias Pohl commented on FLINK-34411:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57410&view=logs&j=bea52777-eaf8-5663-8482-18fbc3630e81&t=43ba8ce7-ebbf-57cd-9163-444305d74117&l=4880

> "Wordcount on Docker test (custom fs plugin)" timed out with some strange 
> issue while setting the test up
> -
>
> Key: FLINK-34411
> URL: https://issues.apache.org/jira/browse/FLINK-34411
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Affects Versions: 1.19.0, 1.20.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57380&view=logs&j=bea52777-eaf8-5663-8482-18fbc3630e81&t=43ba8ce7-ebbf-57cd-9163-444305d74117&l=5802
> {code}
> Feb 07 15:22:39 
> ==
> Feb 07 15:22:39 Running 'Wordcount on Docker test (custom fs plugin)'
> Feb 07 15:22:39 
> ==
> Feb 07 15:22:39 TEST_DATA_DIR: 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-39516987853
> Feb 07 15:22:40 Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT
> Feb 07 15:22:40 Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT
> Feb 07 15:22:41 Docker version 24.0.7, build afdd53b
> Feb 07 15:22:44 docker-compose version 1.29.2, build 5becea4c
> Feb 07 15:22:44 Starting fileserver for Flink distribution
> Feb 07 15:22:44 ~/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin 
> ~/work/1/s
> Feb 07 15:23:07 ~/work/1/s
> Feb 07 15:23:07 
> ~/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-39516987853
>  ~/work/1/s
> Feb 07 15:23:07 Preparing Dockeriles
> Feb 07 15:23:07 Executing command: git clone 
> https://github.com/apache/flink-docker.git --branch dev-1.19 --single-branch
> Cloning into 'flink-docker'...
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common_docker.sh: 
> line 65: ./add-custom.sh: No such file or directory
> Feb 07 15:23:07 Building images
> ERROR: unable to prepare context: path "dev/test_docker_embedded_job-ubuntu" 
> not found
> Feb 07 15:23:09 ~/work/1/s
> Feb 07 15:23:09 Command: build_image test_docker_embedded_job failed. 
> Retrying...
> Feb 07 15:23:14 Starting fileserver for Flink distribution
> Feb 07 15:23:14 ~/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin 
> ~/work/1/s
> Feb 07 15:23:36 ~/work/1/s
> Feb 07 15:23:36 
> ~/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-39516987853
>  ~/work/1/s
> Feb 07 15:23:36 Preparing Dockeriles
> Feb 07 15:23:36 Executing command: git clone 
> https://github.com/apache/flink-docker.git --branch dev-1.19 --single-branch
> fatal: destination path 'flink-docker' already exists and is not an empty 
> directory.
> Feb 07 15:23:36 Retry 1/5 exited 128, retrying in 1 seconds...
> Traceback (most recent call last):
>   File 
> "/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/python3_fileserver.py",
>  line 26, in <module>
> httpd = socketserver.TCPServer(("", ), handler)
>   File "/usr/lib/python3.8/socketserver.py", line 452, in __init__
> self.server_bind()
>   File "/usr/lib/python3.8/socketserver.py", line 466, in server_bind
> self.socket.bind(self.server_address)
> OSError: [Errno 98] Address already in use
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34403) VeryBigPbProtoToRowTest#testSimple cannot pass due to OOM

2024-02-09 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815969#comment-17815969
 ] 

Matthias Pohl commented on FLINK-34403:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57406&view=logs&j=fc5181b0-e452-5c8f-68de-1097947f6483&t=995c650b-6573-581c-9ce6-7ad4cc038461&l=23861

> VeryBigPbProtoToRowTest#testSimple cannot pass due to OOM
> -
>
> Key: FLINK-34403
> URL: https://issues.apache.org/jira/browse/FLINK-34403
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.20.0
>Reporter: Benchao Li
>Priority: Critical
>  Labels: test-stability
>
> After FLINK-33611 merged, the misc test on GHA cannot pass due to out of 
> memory error, throwing following exceptions:
> {code:java}
> Error: 05:43:21 05:43:21.768 [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 40.98 s <<< FAILURE! -- in 
> org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest
> Error: 05:43:21 05:43:21.773 [ERROR] 
> org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest.testSimple -- Time 
> elapsed: 40.97 s <<< ERROR!
> Feb 07 05:43:21 org.apache.flink.util.FlinkRuntimeException: Error in 
> serialization.
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:327)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:162)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamGraph.getJobGraph(StreamGraph.java:1007)
> Feb 07 05:43:21   at 
> org.apache.flink.client.StreamGraphTranslator.translateToJobGraph(StreamGraphTranslator.java:56)
> Feb 07 05:43:21   at 
> org.apache.flink.client.FlinkPipelineTranslationUtil.getJobGraph(FlinkPipelineTranslationUtil.java:45)
> Feb 07 05:43:21   at 
> org.apache.flink.client.deployment.executors.PipelineExecutorUtils.getJobGraph(PipelineExecutorUtils.java:61)
> Feb 07 05:43:21   at 
> org.apache.flink.client.deployment.executors.LocalExecutor.getJobGraph(LocalExecutor.java:104)
> Feb 07 05:43:21   at 
> org.apache.flink.client.deployment.executors.LocalExecutor.execute(LocalExecutor.java:81)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:2440)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:2421)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.datastream.DataStream.executeAndCollectWithClient(DataStream.java:1495)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.datastream.DataStream.executeAndCollect(DataStream.java:1382)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.datastream.DataStream.executeAndCollect(DataStream.java:1367)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.validateRow(ProtobufTestHelper.java:66)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:89)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:76)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:71)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest.testSimple(VeryBigPbRowToProtoTest.java:37)
> Feb 07 05:43:21   at java.lang.reflect.Method.invoke(Method.java:498)
> Feb 07 05:43:21 Caused by: java.util.concurrent.ExecutionException: 
> java.lang.IllegalArgumentException: Self-suppression not permitted
> Feb 07 05:43:21   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> Feb 07 05:43:21   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:323)
> Feb 07 05:43:21   ... 18 more
> Feb 07 05:43:21 Caused by: java.lang.IllegalArgumentException: 
> Self-suppression not permitted
> Feb 07 05:43:21   at 
> java.lang.Throwable.addSuppressed(Throwable.java:1072)
> Feb 07 05:43:21   at 
> org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:556)
> Feb 07 05:43:21   at 
> org.apache.flink.util.InstantiationUtil.writeObjectToConfig(InstantiationUtil.java:486)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamConfig.lambda$triggerSerializationAndReturnFutu
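
The "Self-suppression not permitted" failure originates in the JDK itself: 
java.lang.Throwable#addSuppressed throws an IllegalArgumentException when a 
throwable is asked to suppress itself, which typically happens when a 
try-with-resources close path re-throws the same exception instance that is 
already propagating (here it most likely masks the underlying serialization 
error). A minimal standalone demonstration, not Flink code:

{code:java}
// Demonstrates how the JDK produces "Self-suppression not permitted":
// Throwable#addSuppressed rejects the throwable instance itself.
public class SelfSuppressionDemo {
    public static void main(String[] args) {
        Exception e = new Exception("original failure");
        try {
            e.addSuppressed(e); // throws IllegalArgumentException
        } catch (IllegalArgumentException iae) {
            // Prints "Self-suppression not permitted"
            System.out.println(iae.getMessage());
        }
    }
}
{code}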

[jira] [Created] (FLINK-34418) YARNSessionCapacitySchedulerITCase.testVCoresAreSetCorrectlyAndJobManagerHostnameAreShownInWebInterfaceAndDynamicPropertiesAndYarnApplicationNameAndTaskManagerSlots fail

2024-02-09 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-34418:
-

 Summary: 
YARNSessionCapacitySchedulerITCase.testVCoresAreSetCorrectlyAndJobManagerHostnameAreShownInWebInterfaceAndDynamicPropertiesAndYarnApplicationNameAndTaskManagerSlots
 failed due to disk space
 Key: FLINK-34418
 URL: https://issues.apache.org/jira/browse/FLINK-34418
 Project: Flink
  Issue Type: Bug
  Components: Test Infrastructure
Affects Versions: 1.18.1, 1.19.0, 1.20.0
Reporter: Matthias Pohl


[https://github.com/apache/flink/actions/runs/7838691874/job/21390739806#step:10:27746]
{code:java}
[...]
Feb 09 03:00:13 Caused by: java.io.IOException: No space left on device
27608Feb 09 03:00:13at java.io.FileOutputStream.writeBytes(Native Method)
27609Feb 09 03:00:13at 
java.io.FileOutputStream.write(FileOutputStream.java:326)
27610Feb 09 03:00:13at 
org.apache.logging.log4j.core.appender.OutputStreamManager.writeToDestination(OutputStreamManager.java:250)
27611Feb 09 03:00:13... 39 more
[...] {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-26515) RetryingExecutorTest. testDiscardOnTimeout failed on azure

2024-02-09 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815965#comment-17815965
 ] 

Matthias Pohl commented on FLINK-26515:
---

https://github.com/apache/flink/actions/runs/7838691874/job/21390763726#step:10:10503

> RetryingExecutorTest. testDiscardOnTimeout failed on azure
> --
>
> Key: FLINK-26515
> URL: https://issues.apache.org/jira/browse/FLINK-26515
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.14.3, 1.17.0, 1.16.1, 1.18.0, 1.19.0
>Reporter: Yun Gao
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available, 
> test-stability
>
> {code:java}
> Mar 06 01:20:29 [ERROR] Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 1.941 s <<< FAILURE! - in 
> org.apache.flink.changelog.fs.RetryingExecutorTest
> Mar 06 01:20:29 [ERROR] testTimeout  Time elapsed: 1.934 s  <<< FAILURE!
> Mar 06 01:20:29 java.lang.AssertionError: expected:<500.0> but 
> was:<1922.869766>
> Mar 06 01:20:29   at org.junit.Assert.fail(Assert.java:89)
> Mar 06 01:20:29   at org.junit.Assert.failNotEquals(Assert.java:835)
> Mar 06 01:20:29   at org.junit.Assert.assertEquals(Assert.java:555)
> Mar 06 01:20:29   at org.junit.Assert.assertEquals(Assert.java:685)
> Mar 06 01:20:29   at 
> org.apache.flink.changelog.fs.RetryingExecutorTest.testTimeout(RetryingExecutorTest.java:145)
> Mar 06 01:20:29   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Mar 06 01:20:29   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Mar 06 01:20:29   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Mar 06 01:20:29   at java.lang.reflect.Method.invoke(Method.java:498)
> Mar 06 01:20:29   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Mar 06 01:20:29   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Mar 06 01:20:29   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Mar 06 01:20:29   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Mar 06 01:20:29   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Mar 06 01:20:29   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Mar 06 01:20:29   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Mar 06 01:20:29   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> Mar 06 01:20:29   at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> Mar 06 01:20:29   at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:43)
> Mar 06 01:20:29   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Mar 06 01:20:29   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Mar 06 01:20:29   at 
> java.util.Iterator.forEachRemaining(Iterator.java:116)
> Mar 06 01:20:29   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> Mar 06 01:20:29   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> Mar 06 01:20:29   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32569&view=logs&j=f450c1a5-64b1-5955-e215-49cb1ad5ec88&t=cc452273-9efa-565d-9db8-ef62a38a0c10&l=22554
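
The assertion above compares the measured elapsed time for exact equality with 
500.0 and failed with ~1922ms, which points at a timing-sensitive test rather 
than a functional regression. A sketch of the usual hardening pattern, with a 
hypothetical operation standing in for the real test body (not the actual fix 
applied in Flink):

{code:java}
import static org.junit.Assert.assertTrue;

public class TimingAssertionSketch {

    // Hypothetical stand-in for an operation expected to take at least ~500ms.
    static void runOperationWithTimeout() throws InterruptedException {
        Thread.sleep(500);
    }

    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        runOperationWithTimeout();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        // Assert only a lower bound: sleeps and timeouts routinely overshoot
        // on loaded CI machines, so exact equality on wall-clock time flakes.
        assertTrue("finished too early: " + elapsedMillis + "ms",
                elapsedMillis >= 500);
    }
}
{code}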



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-26515) RetryingExecutorTest. testDiscardOnTimeout failed on azure

2024-02-09 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815965#comment-17815965
 ] 

Matthias Pohl edited comment on FLINK-26515 at 2/9/24 8:41 AM:
---

1.18: 
[https://github.com/apache/flink/actions/runs/7838691874/job/21390763726#step:10:10503]


was (Author: mapohl):
https://github.com/apache/flink/actions/runs/7838691874/job/21390763726#step:10:10503

> RetryingExecutorTest. testDiscardOnTimeout failed on azure
> --
>
> Key: FLINK-26515
> URL: https://issues.apache.org/jira/browse/FLINK-26515
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.14.3, 1.17.0, 1.16.1, 1.18.0, 1.19.0
>Reporter: Yun Gao
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available, 
> test-stability
>
> {code:java}
> Mar 06 01:20:29 [ERROR] Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 1.941 s <<< FAILURE! - in 
> org.apache.flink.changelog.fs.RetryingExecutorTest
> Mar 06 01:20:29 [ERROR] testTimeout  Time elapsed: 1.934 s  <<< FAILURE!
> Mar 06 01:20:29 java.lang.AssertionError: expected:<500.0> but 
> was:<1922.869766>
> Mar 06 01:20:29   at org.junit.Assert.fail(Assert.java:89)
> Mar 06 01:20:29   at org.junit.Assert.failNotEquals(Assert.java:835)
> Mar 06 01:20:29   at org.junit.Assert.assertEquals(Assert.java:555)
> Mar 06 01:20:29   at org.junit.Assert.assertEquals(Assert.java:685)
> Mar 06 01:20:29   at 
> org.apache.flink.changelog.fs.RetryingExecutorTest.testTimeout(RetryingExecutorTest.java:145)
> Mar 06 01:20:29   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Mar 06 01:20:29   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Mar 06 01:20:29   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Mar 06 01:20:29   at java.lang.reflect.Method.invoke(Method.java:498)
> Mar 06 01:20:29   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Mar 06 01:20:29   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Mar 06 01:20:29   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Mar 06 01:20:29   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Mar 06 01:20:29   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Mar 06 01:20:29   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Mar 06 01:20:29   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Mar 06 01:20:29   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> Mar 06 01:20:29   at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> Mar 06 01:20:29   at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:43)
> Mar 06 01:20:29   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Mar 06 01:20:29   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Mar 06 01:20:29   at 
> java.util.Iterator.forEachRemaining(Iterator.java:116)
> Mar 06 01:20:29   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> Mar 06 01:20:29   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> Mar 06 01:20:29   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32569&view=logs&j=f450c1a5-64b1-5955-e215-49cb1ad5ec88&t=cc452273-9efa-565d-9db8-ef62a38a0c10&l=22554



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-30275) TaskExecutorTest.testSharedResourcesLifecycle fails

2024-02-09 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815964#comment-17815964
 ] 

Matthias Pohl commented on FLINK-30275:
---

https://github.com/apache/flink/actions/runs/7838691836/job/21390758520#step:10:8902

> TaskExecutorTest.testSharedResourcesLifecycle fails
> ---
>
> Key: FLINK-30275
> URL: https://issues.apache.org/jira/browse/FLINK-30275
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.17.0, 1.18.0
>Reporter: Matthias Pohl
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available, stale-assigned, test-stability
>
> We observe a test failure in 
> {{TaskExecutorTest.testSharedResourcesLifecycle}}:
> {code}
> Dec 02 03:35:32 [ERROR] Tests run: 37, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 10.604 s <<< FAILURE! - in 
> org.apache.flink.runtime.taskexecutor.TaskExecutorTest
> Dec 02 03:35:32 [ERROR] 
> org.apache.flink.runtime.taskexecutor.TaskExecutorTest.testSharedResourcesLifecycle
>   Time elapsed: 0.549 s  <<< FAILURE!
> Dec 02 03:35:32 java.lang.AssertionError: expected:<0> but was:<1>
> Dec 02 03:35:32   at org.junit.Assert.fail(Assert.java:89)
> Dec 02 03:35:32   at org.junit.Assert.failNotEquals(Assert.java:835)
> Dec 02 03:35:32   at org.junit.Assert.assertEquals(Assert.java:647)
> Dec 02 03:35:32   at org.junit.Assert.assertEquals(Assert.java:633)
> Dec 02 03:35:32   at 
> org.apache.flink.runtime.taskexecutor.TaskExecutorTest.testSharedResourcesLifecycle(TaskExecutorTest.java:3080)
> [...]
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43662&view=logs&j=77a9d8e1-d610-59b3-fc2a-4766541e0e33&t=125e07e7-8de0-5c6c-a541-a567415af3ef&l=7466



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34394) Release Testing Instructions: Verify FLINK-33028 Make expanding behavior of virtual metadata columns configurable

2024-02-09 Thread Timo Walther (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815962#comment-17815962
 ] 

Timo Walther edited comment on FLINK-34394 at 2/9/24 8:36 AM:
--

Hi [~lincoln.86xy], this feature is rather small and should be covered well by 
tests already. It is used in production already in our internal fork. I will 
close this issue.


was (Author: twalthr):
Hi [~lincoln.86xy], this feature is rather small and should be covered well by 
tests already. Is it used in production already in out internal fork. I will 
close this issue.

> Release Testing Instructions: Verify FLINK-33028 Make expanding behavior of 
> virtual metadata columns configurable
> -
>
> Key: FLINK-34394
> URL: https://issues.apache.org/jira/browse/FLINK-34394
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Timo Walther
>Priority: Blocker
> Fix For: 1.19.0
>
> Attachments: screenshot-1.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34394) Release Testing Instructions: Verify FLINK-33028 Make expanding behavior of virtual metadata columns configurable

2024-02-09 Thread Timo Walther (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815962#comment-17815962
 ] 

Timo Walther commented on FLINK-34394:
--

Hi [~lincoln.86xy], this feature is rather small and should be covered well by 
tests already. It is used in production already in our internal fork. I will 
close this issue.

> Release Testing Instructions: Verify FLINK-33028 Make expanding behavior of 
> virtual metadata columns configurable
> -
>
> Key: FLINK-34394
> URL: https://issues.apache.org/jira/browse/FLINK-34394
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Timo Walther
>Priority: Blocker
> Fix For: 1.19.0
>
> Attachments: screenshot-1.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-34410) Disable nightly trigger in forks

2024-02-09 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl resolved FLINK-34410.
---
Fix Version/s: 1.20.0
   Resolution: Fixed

master: 
[d2abd744621c6f0f65e7154a2c1b53bcaf78e90b|https://github.com/apache/flink/commit/d2abd744621c6f0f65e7154a2c1b53bcaf78e90b]

> Disable nightly trigger in forks
> 
>
> Key: FLINK-34410
> URL: https://issues.apache.org/jira/browse/FLINK-34410
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> We can disable the automatic triggering of the nightly trigger workflow in 
> forks (see the [GHA 
> docs|https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions]):
> {code}
> if: github.repository == 'octo-org/octo-repo-prod'
> {code}
> No backport is needed because the schedule triggers will only fire for 
> {{master}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34394) Release Testing Instructions: Verify FLINK-33028 Make expanding behavior of virtual metadata columns configurable

2024-02-09 Thread Timo Walther (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther closed FLINK-34394.

Resolution: Fixed

> Release Testing Instructions: Verify FLINK-33028 Make expanding behavior of 
> virtual metadata columns configurable
> -
>
> Key: FLINK-34394
> URL: https://issues.apache.org/jira/browse/FLINK-34394
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Timo Walther
>Priority: Blocker
> Fix For: 1.19.0
>
> Attachments: screenshot-1.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34386][state] Add RocksDB bloom filter metrics [flink]

2024-02-09 Thread via GitHub


hejufang commented on PR #24274:
URL: https://github.com/apache/flink/pull/24274#issuecomment-1935521846

   @flinkbot  run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34410][ci] Disables the nightly trigger workflow for forks [flink]

2024-02-09 Thread via GitHub


XComp merged PR #24291:
URL: https://github.com/apache/flink/pull/24291


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-28693) Codegen failed if the watermark is defined on a columnByExpression

2024-02-09 Thread Sebastien Pereira (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815957#comment-17815957
 ] 

Sebastien Pereira commented on FLINK-28693:
---

We are facing a similar issue in 1.18; our analysis leads to the following:

Once deployed, the job fails with: *_Could not instantiate generated class 
'WatermarkGenerator$0'_*
{code:java}
org.apache.flink.runtime.taskmanager.Task                    [] - Source: 
source_1___TABLE[1] -> Calc[2] -> KafkaSinkTable6[3]: Writer -> 
KafkaSinkTable6[3]: Committer (1/1)#0 
(b7ae8e7fdeab754fd21c02a17b7736aa_cbc357ccb763df2852fee8c4fc7d55f2_0_0) 
switched from RUNNING to FAILED with failure cause:
2024-02-08T16:25:56.613700525Z java.lang.RuntimeException: Could not 
instantiate generated class 'WatermarkGenerator$0'
2024-02-08T16:25:56.613703025Z     at 
org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:74)
 ~[flink-table-runtime-1.18.0.jar:1.18.0]
[...]
Caused by: 
org.apache.flink.shaded.guava31.com.google.common.util.concurrent.UncheckedExecutionException:
 org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
compiled. This is a bug. Please file an issue.{code}
To reproduce, the source table should meet these conditions:
 * the watermark definition is based on a computed column,
 * the computed column relies on a UDF,
 * the table uses the `kafka` connector

 
{code:sql}
CREATE TABLE `source_1___TABLE`
(
  `ts` STRING,
  `ts___EVENT_TIME` AS CAST (TO_TIMESTAMP_UDF(`ts`) AS TIMESTAMP(3)),
  WATERMARK FOR `ts___EVENT_TIME` AS `ts___EVENT_TIME` - INTERVAL '1' MINUTE
){code}
Note that with the `filesystem` connector, such a declaration deploys and 
consumes events as expected.

The behavior is consistent whether the job is created and deployed directly 
through the Table API or through the SQL client using a SQL statement 
definition.
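
For completeness, a minimal sketch of a UDF shaped like the TO_TIMESTAMP_UDF 
assumed by the DDL above (a hypothetical implementation, mirroring the TestUdf 
pattern quoted from the original report below):

{code:java}
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

import org.apache.flink.table.annotation.DataTypeHint;
import org.apache.flink.table.functions.ScalarFunction;

// Hypothetical UDF: parses the incoming string into the event-time timestamp
// that the computed column and the watermark are defined on.
public class ToTimestampUdf extends ScalarFunction {

    private static final DateTimeFormatter FORMAT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    @DataTypeHint("TIMESTAMP(3)")
    public LocalDateTime eval(String strDate) {
        return LocalDateTime.parse(strDate, FORMAT);
    }
}
{code}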

> Codegen failed if the watermark is defined on a columnByExpression
> --
>
> Key: FLINK-28693
> URL: https://issues.apache.org/jira/browse/FLINK-28693
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.15.1
>Reporter: Hongbo
>Priority: Major
>  Labels: pull-request-available
>
> The following code will throw an exception:
>  
> {code:java}
> Table program cannot be compiled. This is a bug. Please file an issue.
>  ...
>  Caused by: org.codehaus.commons.compiler.CompileException: Line 29, Column 
> 54: Cannot determine simple type name "org" {code}
> {color:#00}Code:{color}
> {code:java}
> public class TestUdf extends  ScalarFunction {
> @DataTypeHint("TIMESTAMP(3)")
> public LocalDateTime eval(String strDate) {
>return LocalDateTime.now();
> }
> }
> public class FlinkTest {
> @Test
> void testUdf() throws Exception {
> //var env = StreamExecutionEnvironment.createLocalEnvironment();
> // run `gradlew shadowJar` first to generate the uber jar.
> // It contains the kafka connector and a dummy UDF function.
> var env = 
> StreamExecutionEnvironment.createRemoteEnvironment("localhost", 8081,
> "build/libs/flink-test-all.jar");
> env.setParallelism(1);
> var tableEnv = StreamTableEnvironment.create(env);
> tableEnv.createTemporarySystemFunction("TEST_UDF", TestUdf.class);
> var testTable = tableEnv.from(TableDescriptor.forConnector("kafka")
> .schema(Schema.newBuilder()
> .column("time_stamp", DataTypes.STRING())
> .columnByExpression("udf_ts", "TEST_UDF(time_stamp)")
> .watermark("udf_ts", "udf_ts - INTERVAL '1' second")
> .build())
> // the kafka server doesn't need to exist. It fails in the 
> compile stage before fetching data.
> .option("properties.bootstrap.servers", "localhost:9092")
> .option("topic", "test_topic")
> .option("format", "json")
> .option("scan.startup.mode", "latest-offset")
> .build());
> testTable.printSchema();
> tableEnv.createTemporaryView("test", testTable );
> var query = tableEnv.sqlQuery("select * from test");
> var tableResult = 
> query.executeInsert(TableDescriptor.forConnector("print").build());
> tableResult.await();
> }
> }{code}
> What does the code do?
>  # read a stream from Kakfa
>  # create a derived column using an UDF expression
>  # define the watermark based on the derived column
> The full callstack:
>  
> {code:java}
> org.apache.flink.util.FlinkRuntimeException: 
> org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue.
>