[GitHub] [flink] gaoyunhaii commented on pull request #21423: [1.15] [FLINK-27341][runtime] Remove LOOPBACK from TaskManager findConnectingAddress.

2022-12-01 Thread GitBox


gaoyunhaii commented on PR #21423:
URL: https://github.com/apache/flink/pull/21423#issuecomment-171876

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] hackeryard commented on a diff in pull request #20377: [FLINK-27338][hive] Improve splitting file for Hive table with orc format

2022-12-01 Thread GitBox


hackeryard commented on code in PR #20377:
URL: https://github.com/apache/flink/pull/20377#discussion_r1036803986


##
docs/content.zh/docs/connectors/table/hive/hive_read_write.md:
##
@@ -150,6 +150,41 @@ Flink allows you to flexibly configure the parallelism inference strategy. You can, in `TableConfig`
   
 
 
+### Tuning the split size when reading Hive tables
+When reading a Hive table, the data files are divided into a number of splits, and each split is a portion of the data to read.
+Splits are the basic granularity at which Flink assigns tasks and reads data in parallel.
+Users can tune read performance by adjusting the split size with the following parameters.
+
+| Key | Default | Type | Description |
+| --- | --- | --- | --- |
+| table.exec.hive.split-max-size | 128mb | MemorySize | The maximum number of bytes (default 128MB) a split may contain when reading a Hive table. |
+| table.exec.hive.file-open-cost | 4mb | MemorySize | The estimated cost of opening a file, in bytes (default 4MB). If this value is large, Flink tends to split the Hive table into fewer splits, which is useful when the Hive table contains a large number of small files. |

Review Comment:
   @luoyuxia when there are a lot of small files, e.g. 1 MB per file, I think the 
split number is the same as the file number. So I don't understand why the open 
cost is a good way to solve this problem.
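For context, here is a self-contained sketch (an illustration only, not Flink's actual FLINK-27338 implementation; method and class names are made up) of how a file-open cost can group many small files into far fewer splits than one split per file, while also bounding how many file-open overheads a single split accumulates:

```java
public class SplitPacking {
    // Pack files into splits; each file counts as at least `openCost` bytes.
    // Packing only: splitting of files larger than splitMaxSize is omitted.
    static int packSplits(long[] fileSizes, long splitMaxSize, long openCost) {
        int splits = 0;
        long current = 0;
        for (long size : fileSizes) {
            long weighted = Math.max(size, openCost); // open cost acts as a size floor
            if (current > 0 && current + weighted > splitMaxSize) {
                splits++; // close the current split and start a new one
                current = 0;
            }
            current += weighted;
        }
        return current > 0 ? splits + 1 : splits;
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024;
        long[] smallFiles = new long[100];
        java.util.Arrays.fill(smallFiles, 1 * mb); // 100 files of 1 MB each
        // Each file is weighted to 4 MB, so a 128 MB split holds at most 32 files:
        // 100 small files end up in 4 splits rather than 100 single-file splits.
        System.out.println(packSplits(smallFiles, 128 * mb, 4 * mb));
    }
}
```

Under this model the split count is not "the same as the file number": the small files are batched, and a larger open cost limits how many files any one split opens.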






[jira] [Updated] (FLINK-30072) Cannot assign instance of SerializedLambda to field KeyGroupStreamPartitioner.keySelector

2022-12-01 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-30072:

Component/s: Runtime / Task
 (was: Runtime / Coordination)

> Cannot assign instance of SerializedLambda to field 
> KeyGroupStreamPartitioner.keySelector
> -
>
> Key: FLINK-30072
> URL: https://issues.apache.org/jira/browse/FLINK-30072
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.16.0
>Reporter: Nico Kruber
>Priority: Major
>
> In application mode, if the {{usrlib}} directories of the JM and TM differ, 
> e.g. same jars but different names, the job fails with this 
> cryptic exception on the JM:
> {code}
> 2022-11-17 09:55:12,968 INFO  
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler [] - Restarting 
> job.
> org.apache.flink.streaming.runtime.tasks.StreamTaskException: Could not 
> instantiate outputs in order.
> at 
> org.apache.flink.streaming.api.graph.StreamConfig.getVertexNonChainedOutputs(StreamConfig.java:537)
>  ~[flink-dist-1.16.0-ok.0.jar:1.16.0-ok.0]
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.createRecordWriters(StreamTask.java:1600)
>  ~[flink-dist-1.16.0-ok.0.jar:1.16.0-ok.0]
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.createRecordWriterDelegate(StreamTask.java:1584)
>  ~[flink-dist-1.16.0-ok.0.jar:1.16.0-ok.0]
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.(StreamTask.java:408)
>  ~[flink-dist-1.16.0-ok.0.jar:1.16.0-ok.0]
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.(StreamTask.java:362)
>  ~[flink-dist-1.16.0-ok.0.jar:1.16.0-ok.0]
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.(StreamTask.java:335)
>  ~[flink-dist-1.16.0-ok.0.jar:1.16.0-ok.0]
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.(StreamTask.java:327)
>  ~[flink-dist-1.16.0-ok.0.jar:1.16.0-ok.0]
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.(StreamTask.java:317)
>  ~[flink-dist-1.16.0-ok.0.jar:1.16.0-ok.0]
> at 
> org.apache.flink.streaming.runtime.tasks.SourceOperatorStreamTask.(SourceOperatorStreamTask.java:84)
>  ~[flink-dist-1.16.0-ok.0.jar:1.16.0-ok.0]
> at 
> jdk.internal.reflect.GeneratedConstructorAccessor38.newInstance(Unknown 
> Source) ~[?:?]
> at 
> jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown 
> Source) ~[?:?]
> at java.lang.reflect.Constructor.newInstance(Unknown Source) ~[?:?]
> at 
> org.apache.flink.runtime.taskmanager.Task.loadAndInstantiateInvokable(Task.java:1589)
>  ~[flink-dist-1.16.0-ok.0.jar:1.16.0-ok.0]
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:714) 
> ~[flink-dist-1.16.0-ok.0.jar:1.16.0-ok.0]
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550) 
> ~[flink-dist-1.16.0-ok.0.jar:1.16.0-ok.0]
> at java.lang.Thread.run(Unknown Source) ~[?:?]
> Caused by: java.lang.ClassCastException: cannot assign instance of 
> java.lang.invoke.SerializedLambda to field 
> org.apache.flink.streaming.runtime.partitioner.KeyGroupStreamPartitioner.keySelector
>  of type org.apache.flink.api.java.functions.KeySelector in instance of 
> org.apache.flink.streaming.runtime.partitioner.KeyGroupStreamPartitioner
> at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(Unknown 
> Source) ~[?:?]
> at 
> java.io.ObjectStreamClass$FieldReflector.checkObjectFieldValueTypes(Unknown 
> Source) ~[?:?]
> at java.io.ObjectStreamClass.checkObjFieldValueTypes(Unknown Source) 
> ~[?:?]
> at java.io.ObjectInputStream.defaultCheckFieldValues(Unknown Source) 
> ~[?:?]
> at java.io.ObjectInputStream.readSerialData(Unknown Source) ~[?:?]
> at java.io.ObjectInputStream.readOrdinaryObject(Unknown Source) ~[?:?]
> at java.io.ObjectInputStream.readObject0(Unknown Source) ~[?:?]
> at java.io.ObjectInputStream.defaultReadFields(Unknown Source) ~[?:?]
> at java.io.ObjectInputStream.readSerialData(Unknown Source) ~[?:?]
> at java.io.ObjectInputStream.readOrdinaryObject(Unknown Source) ~[?:?]
> at java.io.ObjectInputStream.readObject0(Unknown Source) ~[?:?]
> at java.io.ObjectInputStream.readObject(Unknown Source) ~[?:?]
> at java.io.ObjectInputStream.readObject(Unknown Source) ~[?:?]
> at java.util.ArrayList.readObject(Unknown Source) ~[?:?]
> at jdk.internal.reflect.GeneratedMethodAccessor16.invoke(Unknown Source) 
> ~[?:?]
> at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown 
> Source) ~[?:?]
> at java.lang.reflect.Method.invoke(Unknown Source) ~[?:?]
> at java.io.ObjectStreamClass.invokeReadObject(Unknown Source) ~[?:?]
> at java.io.ObjectInputStream.readSerialData(Unknown Source) ~[?:?

[jira] [Assigned] (FLINK-30239) The flame graph doesn't work due to groupExecutionsByLocation has bug

2022-12-01 Thread Lijie Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijie Wang reassigned FLINK-30239:
--

Assignee: Rui Fan  (was: Lijie Wang)

> The flame graph doesn't work due to groupExecutionsByLocation has bug
> -
>
> Key: FLINK-30239
> URL: https://issues.apache.org/jira/browse/FLINK-30239
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST, Runtime / Web Frontend
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0, 1.16.1
>
> Attachments: image-2022-11-30-00-10-48-940.png, 
> image-2022-11-30-00-11-09-728.png, image-2022-11-30-00-14-11-355.png
>
>
> The flame graph can never be generated when multiple tasks run in the same 
> TM. It's caused by FLINK-26074.
>  
> h1. Root cause:
> A Set cannot be converted to an ImmutableSet during the aggregation of 
> ExecutionAttemptIDs. As a result, only the first ExecutionAttemptID of the 
> TM is added to the set; adding the second ExecutionAttemptID fails.
>  
> !image-2022-11-30-00-14-11-355.png!
>  
>  
>  
> !image-2022-11-30-00-11-09-728.png!
>  
> Exception Info: 
> !image-2022-11-30-00-10-48-940.png!
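The described root cause can be reproduced with plain JDK collections (a sketch of the pattern only, not the actual Flink code; the map keys and values here are made up):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ImmutableSetBug {
    public static void main(String[] args) {
        Map<String, Set<String>> byLocation = new HashMap<>();

        // Buggy pattern: the first attempt ID seeds an immutable set, so
        // aggregating a second ID for the same TM throws.
        byLocation.put("tm-1", Collections.singleton("attempt-1"));
        try {
            byLocation.get("tm-1").add("attempt-2");
        } catch (UnsupportedOperationException e) {
            System.out.println("second add failed");
        }

        // Fixed pattern: aggregate into a mutable set.
        byLocation.put("tm-1", new HashSet<>(Collections.singleton("attempt-1")));
        byLocation.get("tm-1").add("attempt-2");
        System.out.println(byLocation.get("tm-1").size()); // 2
    }
}
```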



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-30239) The flame graph doesn't work due to groupExecutionsByLocation has bug

2022-12-01 Thread Lijie Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijie Wang reassigned FLINK-30239:
--

Assignee: Lijie Wang

> The flame graph doesn't work due to groupExecutionsByLocation has bug
> -
>
> Key: FLINK-30239
> URL: https://issues.apache.org/jira/browse/FLINK-30239
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST, Runtime / Web Frontend
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Rui Fan
>Assignee: Lijie Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0, 1.16.1
>
> Attachments: image-2022-11-30-00-10-48-940.png, 
> image-2022-11-30-00-11-09-728.png, image-2022-11-30-00-14-11-355.png
>
>
> The flame graph can never be generated when multiple tasks run in the same 
> TM. It's caused by FLINK-26074.
>  
> h1. Root cause:
> A Set cannot be converted to an ImmutableSet during the aggregation of 
> ExecutionAttemptIDs. As a result, only the first ExecutionAttemptID of the 
> TM is added to the set; adding the second ExecutionAttemptID fails.
>  
> !image-2022-11-30-00-14-11-355.png!
>  
>  
>  
> !image-2022-11-30-00-11-09-728.png!
>  
> Exception Info: 
> !image-2022-11-30-00-10-48-940.png!





[jira] [Commented] (FLINK-30081) Local executor can not accept different jvm-overhead.min/max values

2022-12-01 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641751#comment-17641751
 ] 

Yun Gao commented on FLINK-30081:
-

Hi [~liuml07], the mini-cluster does not start new processes; it just executes 
different components in different threads. Thus I think it has no chance 
to change the memory settings, and you might instead increase the memory of the 
tests directly on startup. 
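A sketch of what "increase the memory on startup" could look like for a Maven-run test (the test name and heap value here are assumptions, not from the original thread):

{code}
# Raise the heap of the single JVM that hosts the mini-cluster,
# instead of tuning TaskManager memory options that it cannot apply.
mvn test -Dtest=MyLocalJobTest -DargLine="-Xmx4g"
{code}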

> Local executor can not accept different jvm-overhead.min/max values
> ---
>
> Key: FLINK-30081
> URL: https://issues.apache.org/jira/browse/FLINK-30081
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Affects Versions: 1.16.0
>Reporter: Mingliang Liu
>Priority: Major
>
> In local executor, it's not possible to set different values for 
> {{taskmanager.memory.jvm-overhead.max}} and 
> {{{}taskmanager.memory.jvm-overhead.min{}}}. The same problem for 
> {{taskmanager.memory.network.max}} and {{{}taskmanager.memory.network.min{}}}.
> Sample code to reproduce:
> {code:java}
> Configuration conf = new Configuration();
> conf.setString(TaskManagerOptions.JVM_OVERHEAD_MIN.key(), "1GB");
> conf.setString(TaskManagerOptions.JVM_OVERHEAD_MAX.key(), "2GB");
> StreamExecutionEnvironment.createLocalEnvironment(conf)
> .fromElements("Hello", "World")
> .executeAndCollect()
> .forEachRemaining(System.out::println); {code}
> The failing exception is something like:
> {code:java}
> Exception in thread "main" java.lang.IllegalArgumentException
>   at org.apache.flink.util.Preconditions.checkArgument(Preconditions.java:122)
>   at 
> org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.calculateTotalProcessMemoryFromComponents(TaskExecutorResourceUtils.java:182)
>   at 
> org.apache.flink.runtime.taskexecutor.TaskExecutorMemoryConfiguration.create(TaskExecutorMemoryConfiguration.java:119)
> {code}
> I think the problem is that we expect the max and min to be equal, but the local 
> executor did not reset them correctly?





[jira] [Created] (FLINK-30257) SqlClientITCase#testMatchRecognize failed

2022-12-01 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-30257:
--

 Summary: SqlClientITCase#testMatchRecognize failed
 Key: FLINK-30257
 URL: https://issues.apache.org/jira/browse/FLINK-30257
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.17.0
Reporter: Martijn Visser


{code:java}
Nov 30 21:54:41 [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time 
elapsed: 224.683 s <<< FAILURE! - in SqlClientITCase
Nov 30 21:54:41 [ERROR] SqlClientITCase.testMatchRecognize  Time elapsed: 
50.164 s  <<< FAILURE!
Nov 30 21:54:41 org.opentest4j.AssertionFailedError: 
Nov 30 21:54:41 
Nov 30 21:54:41 expected: 1
Nov 30 21:54:41  but was: 0
Nov 30 21:54:41 at 
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
Nov 30 21:54:41 at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
Nov 30 21:54:41 at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
Nov 30 21:54:41 at 
SqlClientITCase.verifyNumberOfResultRecords(SqlClientITCase.java:297)
Nov 30 21:54:41 at 
SqlClientITCase.testMatchRecognize(SqlClientITCase.java:255)
Nov 30 21:54:41 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
Nov 30 21:54:41 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Nov 30 21:54:41 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Nov 30 21:54:41 at java.lang.reflect.Method.invoke(Method.java:498)
Nov 30 21:54:41 at 
org.junit.platform.commons.util.ReflectionUtils.invokeMetho
{code}

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43635&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=160c9ae5-96fd-516e-1c91-deb81f59292a&l=14817





[jira] [Commented] (FLINK-29405) InputFormatCacheLoaderTest is unstable

2022-12-01 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641754#comment-17641754
 ] 

Martijn Visser commented on FLINK-29405:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43636&view=logs&j=a9db68b9-a7e0-54b6-0f98-010e0aff39e2&t=cdd32e0b-6047-565b-c58f-14054472f1be&l=10879

> InputFormatCacheLoaderTest is unstable
> --
>
> Key: FLINK-29405
> URL: https://issues.apache.org/jira/browse/FLINK-29405
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Chesnay Schepler
>Assignee: Alexander Smirnov
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> #testExceptionDuringReload/#testCloseAndInterruptDuringReload fail reliably 
> when run in a loop.
> {code}
> java.lang.AssertionError: 
> Expecting AtomicInteger(0) to have value:
>   0
> but did not.
>   at 
> org.apache.flink.table.runtime.functions.table.fullcache.inputformat.InputFormatCacheLoaderTest.testCloseAndInterruptDuringReload(InputFormatCacheLoaderTest.java:161)
> {code}





[jira] [Closed] (FLINK-29940) ExecutionGraph logs job state change at ERROR level when job fails

2022-12-01 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song closed FLINK-29940.

Resolution: Won't Do

> ExecutionGraph logs job state change at ERROR level when job fails
> --
>
> Key: FLINK-29940
> URL: https://issues.apache.org/jira/browse/FLINK-29940
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.16.0
>Reporter: Mingliang Liu
>Priority: Minor
>  Labels: pull-request-available
>
> When the job switches to FAILED state, the log is very useful for understanding why 
> it failed, along with the root-cause exception stack. However, the current log 
> level is INFO, which is a bit inconvenient for users searching the logs among so 
> many surrounding log lines. We could log at ERROR level when the job switches 
> to FAILED state.





[jira] [Commented] (FLINK-29940) ExecutionGraph logs job state change at ERROR level when job fails

2022-12-01 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641758#comment-17641758
 ] 

Xintong Song commented on FLINK-29940:
--

Agree with [~gaoyunhaii] that the state change of a job, even to the 
FAILED state, should not be considered an error of the framework.

I think there are good reasons supporting both ways, treating a job failure as 
an error or not. However, Flink has behaved this way since most likely its 
first day, and lots of users are used to it. I don't think it's necessary to 
break this.

[~liuml07], for this or any other logs that you are particularly interested in, I 
think the proper way is to filter them out with keywords / regex, rather than 
promoting the logs to a higher level.

Closing the ticket as Won't Do.
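One way to filter such lines out is a dedicated log4j2 appender with a RegexFilter, e.g. in a properties-format configuration (this is a sketch; the appender name, file name, and logger name are assumptions):

{code}
# Route job state-change lines to a dedicated file.
appender.statechange.name = StateChangeFile
appender.statechange.type = File
appender.statechange.fileName = ${sys:log.file}.state-changes
appender.statechange.layout.type = PatternLayout
appender.statechange.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %m%n
appender.statechange.filter.regex.type = RegexFilter
appender.statechange.filter.regex.regex = .*switched from .* to FAILED.*
appender.statechange.filter.regex.onMatch = ACCEPT
appender.statechange.filter.regex.onMismatch = DENY
logger.statechange.name = org.apache.flink.runtime.executiongraph
logger.statechange.appenderRef.statechange.ref = StateChangeFile
{code}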

> ExecutionGraph logs job state change at ERROR level when job fails
> --
>
> Key: FLINK-29940
> URL: https://issues.apache.org/jira/browse/FLINK-29940
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.16.0
>Reporter: Mingliang Liu
>Priority: Minor
>  Labels: pull-request-available
>
> When the job switches to FAILED state, the log is very useful for understanding why 
> it failed, along with the root-cause exception stack. However, the current log 
> level is INFO, which is a bit inconvenient for users searching the logs among so 
> many surrounding log lines. We could log at ERROR level when the job switches 
> to FAILED state.





[GitHub] [flink-table-store] JingsongLi commented on a diff in pull request #411: [FLINK-30207] Move split initialization and discovery logic fully into SnapshotEnumerator in Table Store

2022-12-01 Thread GitBox


JingsongLi commented on code in PR #411:
URL: https://github.com/apache/flink-table-store/pull/411#discussion_r1036797509


##
flink-table-store-core/src/main/java/org/apache/flink/table/store/table/source/snapshot/SnapshotEnumerator.java:
##
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.table.source.snapshot;
+
+import org.apache.flink.table.store.table.source.DataTableScan;
+
+import java.util.concurrent.Callable;
+
+/**
+ * Enumerate incremental changes from newly created snapshots.
+ *
+ * <p>The first call to this enumerator will produce a {@link DataTableScan.DataFilePlan}
+ * containing the base files for the following incremental changes (or just return null
+ * if there are no base files).
+ *
+ * <p>Following calls to this enumerator will produce {@link DataTableScan.DataFilePlan}s
+ * containing incremental changed files. If there are currently no newer snapshots, null
+ * will be returned instead.
+ */
+public interface SnapshotEnumerator extends Callable<DataTableScan.DataFilePlan> {}

Review Comment:
   Can it not be a `Callable`? I found it is hard to find the caller in the IDE. 
Like:
   ```
   SnapshotEnumerator {
       @Nullable
       DataTableScan.DataFilePlan enumerate();
   }
   ```



##
flink-table-store-connector/src/main/java/org/apache/flink/table/store/connector/source/ContinuousFileStoreSource.java:
##
@@ -0,0 +1,115 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.connector.source;
+
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.api.connector.source.SplitEnumerator;
+import org.apache.flink.api.connector.source.SplitEnumeratorContext;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.table.store.CoreOptions;
+import org.apache.flink.table.store.file.predicate.Predicate;
+import org.apache.flink.table.store.table.FileStoreTable;
+import org.apache.flink.table.store.table.source.DataTableScan;
+import org.apache.flink.table.store.table.source.snapshot.DeltaSnapshotEnumerator;
+import org.apache.flink.table.store.table.source.snapshot.FullCompactionChangelogSnapshotEnumerator;
+import org.apache.flink.table.store.table.source.snapshot.InputChangelogSnapshotEnumerator;
+import org.apache.flink.table.store.table.source.snapshot.SnapshotEnumerator;
+
+import javax.annotation.Nullable;
+
+import java.util.ArrayList;
+import java.util.Collection;
+
+/** Unbounded {@link FlinkSource} for reading records. It continuously monitors new snapshots. */
+public class ContinuousFileStoreSource extends FlinkSource {
+
+    private static final long serialVersionUID = 1L;
+
+    private final FileStoreTable table;
+    private final long discoveryInterval;
+
+    public ContinuousFileStoreSource(
+            FileStoreTable table,
+            long discoveryInterval,
+            @Nullable int[][] projectedFields,
+            @Nullable Predicate predicate,
+            @Nullable Long limit) {
+        super(table, projectedFields, predicate, limit);
+        this.table = table;
+        this.discoveryInterval = discoveryInterval;
+    }
+
+    @Override
+    public Boundedness getBoundedness() {
+        return Boundedness.CONTINUOUS_UNBOUNDED;
+    }
+
+    @Override
+    public SplitEnumerator restoreEnumerator(
+            SplitEnumeratorContext context, PendingSplitsCheckpoint checkpoint) {
+Dat

[jira] [Commented] (FLINK-30131) flink iterate will suspend when record is a bit large

2022-12-01 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641760#comment-17641760
 ] 

Yun Gao commented on FLINK-30131:
-

Hi [~landlord], from the attached image it seems the iteration sink now has 
busy = 100%, which seems to back-pressure the previous tasks. Could you also 
check whether the sink is the bottleneck?

> flink iterate will suspend when record is a bit large
> -
>
> Key: FLINK-30131
> URL: https://issues.apache.org/jira/browse/FLINK-30131
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.15.2
>Reporter: Lu
>Priority: Major
> Attachments: image-2022-11-22-14-59-08-272.png, 
> image-2022-11-24-17-10-45-651.png, image-2022-11-24-17-12-02-129.png, 
> image-2022-11-24-17-12-47-024.png
>
>
>  
> {code:java}
> // code placeholder
> Configuration configuration = new Configuration();
> configuration.setInteger(RestOptions.PORT, 8082);
> configuration.setInteger(NETWORK_MAX_BUFFERS_PER_CHANNEL, 1000);
> configuration.set(TaskManagerOptions.NETWORK_MEMORY_MAX, MemorySize.parse("4g"));
> configuration.setInteger("taskmanager.network.memory.buffers-per-channel", 1000);
> configuration.setInteger("taskmanager.network.memory.floating-buffers-per-gate", 1000);
> final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(configuration);
> env.setParallelism(1);
> List<Integer> list = new ArrayList<>(10);
> for (int i = 1; i < 1; i++) {
>     list.add(i);
> }
> DataStreamSource<Integer> integerDataStreamSource = env.fromCollection(list);
> DataStream<byte[]> map = integerDataStreamSource.map(i -> new byte[1000]).setParallelism(1).name("map to byte[]").shuffle();
> IterativeStream<byte[]> iterate = map.iterate();
> DataStream<byte[]> map1 = iterate.process(new ProcessFunction<byte[], byte[]>() {
>     @Override
>     public void processElement(byte[] value, ProcessFunction<byte[], byte[]>.Context ctx, Collector<byte[]> out) throws Exception {
>         out.collect(value);
>     }
> }).name("multi collect");
> DataStream<byte[]> filter = map1.filter(i -> true).setParallelism(1).name("feedback");
> iterate.closeWith(filter);
> map1.map(bytes -> bytes.length).name("map to length").print();
> env.execute(); {code}
> My code is above.
>  
> When I use iterate with a big record, the iteration suspends at a random 
> place. When I looked at the stack, it had a suspicious thread:
> !image-2022-11-22-14-59-08-272.png|width=751,height=328!
> It seems like a network-related problem, so I increased the network buffer 
> memory and count, but this only delays the suspend point; it still suspends 
> after iterating a few more times than before.
> I want to know if this is a bug, or whether I have some error in my code or 
> configuration.
> Looking forward to your reply. Thanks in advance.
>  
>  
>  





[jira] [Commented] (FLINK-29934) maven-assembly-plugin 2.4 make assemble plugin could find xml in flink clients module

2022-12-01 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641761#comment-17641761
 ] 

Xintong Song commented on FLINK-29934:
--

I also cannot reproduce this.

[~jackylau], do you still have this problem?

> maven-assembly-plugin 2.4 make assemble plugin could find xml in flink 
> clients module
> -
>
> Key: FLINK-29934
> URL: https://issues.apache.org/jira/browse/FLINK-29934
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission
>Affects Versions: 1.17.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
> Attachments: image-2022-11-08-20-28-00-814.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> !image-2022-11-08-20-28-00-814.png!





[GitHub] [flink] XComp commented on pull request #21411: [BP-1.16][FLINK-29830][Connector/Pulsar] Create the topic with schema before consuming messages in PulsarSinkITCase. (#21252)

2022-12-01 Thread GitBox


XComp commented on PR #21411:
URL: https://github.com/apache/flink/pull/21411#issuecomment-1333408169

   @MartijnVisser there is no `v3.0` branch in [apache/flink-connector-pulsar](https://github.com/apache/flink-connector-pulsar) (yet?). :thinking: 





[jira] [Commented] (FLINK-29925) table ui of configure value is strange

2022-12-01 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641765#comment-17641765
 ] 

Xintong Song commented on FLINK-29925:
--

[~bytesmith], could you explain a bit more about what is strange? I don't 
really get it from the screenshot.

> table ui of configure value is strange
> --
>
> Key: FLINK-29925
> URL: https://issues.apache.org/jira/browse/FLINK-29925
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.15.3
>Reporter: jiadong.lu
>Priority: Minor
> Attachments: 截屏2022-11-08 15.37.04.png
>
>
> As shown in the figure below, when the configured value is very long, the UI 
> of the table is a bit strange  !截屏2022-11-08 15.37.04.png!





[jira] [Updated] (FLINK-29167) Time out-of-order optimization for merging multiple data streams into one data stream

2022-12-01 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-29167:

Fix Version/s: (was: 1.14.2)

>  Time out-of-order optimization for merging multiple data streams into one 
> data stream
> --
>
> Key: FLINK-29167
> URL: https://issues.apache.org/jira/browse/FLINK-29167
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.14.2
>Reporter: zhangyang
>Priority: Major
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> Problem description: 
>      I have many demand scenarios where I need to combine more than 2 data 
> streams (DataStreams) into one data stream. The business logic behind the data 
> stream processing requires events in time order to fulfill the scenario's 
> requirements, so I use Flink's union operator to perform the merge, but the 
> merged data does not preserve its original event-time order.
> {code:java}
> dataStream0 = dataStream0.union(dataStreamArray);  {code}
> Design suggestion: 
>     In the source code, the streams could be merged in the order of the 
> array in dataStreamArray instead of in random order.
>  
> Solution suggestion: 
>    At present, I use windowAll to sort the merged data in chronological 
> order, which completes the overall scenario, but the parallelism of windowAll 
> can only be 1, which affects the performance of the entire directed acyclic 
> graph. I haven't thought of a better remedy for sorting the merged streams, 
> so I can only suggest that the union preserve ordering, which would save a lot 
> of unnecessary trouble for event-time stream merging.
>  
>  
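What the reporter asks for amounts to a k-way merge by event time across the unioned streams. A plain-Java sketch of that semantics (illustration only, not a Flink API; the class and method names are made up), assuming each input batch is already time-sorted:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class EventTimeMerge {
    // K-way merge of per-stream, already time-sorted batches. Illustration of
    // the desired semantics only; DataStream#union gives no ordering guarantee.
    static List<Long> mergeByTimestamp(List<List<Long>> streams) {
        // Heap entries: {timestamp, stream index, element index}.
        PriorityQueue<long[]> heap =
                new PriorityQueue<>(Comparator.comparingLong((long[] a) -> a[0]));
        for (int s = 0; s < streams.size(); s++) {
            if (!streams.get(s).isEmpty()) {
                heap.add(new long[] {streams.get(s).get(0), s, 0});
            }
        }
        List<Long> out = new ArrayList<>();
        while (!heap.isEmpty()) {
            long[] top = heap.poll();
            int s = (int) top[1];
            int idx = (int) top[2];
            out.add(top[0]);
            if (idx + 1 < streams.get(s).size()) {
                heap.add(new long[] {streams.get(s).get(idx + 1), s, idx + 1});
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(mergeByTimestamp(Arrays.asList(
                Arrays.asList(1L, 4L, 7L), Arrays.asList(2L, 3L, 9L))));
    }
}
```

Unlike the windowAll workaround, such a merge is streaming and needs no global sort, but it only works when each input is itself in order.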





[GitHub] [flink] wsry commented on a diff in pull request #21217: [FLINK-29565]Repair that the web ui log is not highlighted

2022-12-01 Thread GitBox


wsry commented on code in PR #21217:
URL: https://github.com/apache/flink/pull/21217#discussion_r1036837155


##
flink-runtime/src/main/java/org/apache/flink/runtime/util/EnvironmentInformation.java:
##
@@ -469,7 +469,9 @@ public static void logEnvironmentInfo(
 }
 }
 
-log.info(" Classpath: " + System.getProperty("java.class.path"));
+String classPath = System.getProperty("java.class.path");
+
+log.info(" Classpath: " + classPath.replace("/*", "/ *"));

Review Comment:
   Any update? If there is no simple way to solve the issue on the UI side, 
let's not do that.
   For this single case, I think replacing "/*" with "/ *" is not a good 
solution because it changes the path itself. I am not sure whether surrounding the 
classpath string with '"' solves the problem. If so, I think that would be a 
better solution. What do you think? 
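A tiny sketch of the trade-off being discussed (the paths and class name here are made up):

```java
public class ClasspathLog {
    // Replacing "/*" alters the actual path text that gets logged.
    static String replaced(String cp) {
        return cp.replace("/*", "/ *");
    }

    // Quoting keeps the path intact while still delimiting it for a highlighter.
    static String quoted(String cp) {
        return "\"" + cp + "\"";
    }

    public static void main(String[] args) {
        String cp = "/opt/flink/lib/*";
        System.out.println(replaced(cp)); // /opt/flink/lib/ *
        System.out.println(quoted(cp));   // "/opt/flink/lib/*"
    }
}
```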






[jira] [Commented] (FLINK-29364) Root cause of Exceptions thrown in the SourceReader start() method gets "swallowed".

2022-12-01 Thread Lijie Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641770#comment-17641770
 ] 

Lijie Wang commented on FLINK-29364:


I will close this ticket because the problem doesn't seem to exist and the 
reporter is no longer active. Feel free to reopen it if there are any further 
questions.

> Root cause of Exceptions thrown in the SourceReader start() method gets 
> "swallowed".
> 
>
> Key: FLINK-29364
> URL: https://issues.apache.org/jira/browse/FLINK-29364
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.15.3
>Reporter: Alexander Fedulov
>Priority: Major
>
> If an exception is thrown in the {_}SourceReader{_}'s _start()_ method, its 
> root cause does not get captured.
> The details are still available here: 
> [Task.java#L758|https://github.com/apache/flink/blob/bccecc23067eb7f18e20bade814be73393401be5/flink-runtime/src/main/java/org/apache/flink/runtime/taskmanager/Task.java#L758]
> But the execution falls through to 
> [Task.java#L780|https://github.com/apache/flink/blob/bccecc23067eb7f18e20bade814be73393401be5/flink-runtime/src/main/java/org/apache/flink/runtime/taskmanager/Task.java#L780],
> which cancels the source invokable without recording the actual reason,
> discarding the root cause.
>  
> How to reproduce: 
> [DataGeneratorSourceITCase.java#L117|https://github.com/afedulov/flink/blob/3df7669fcc6ba08c5147195b80cc97ac1481ec8c/flink-tests/src/test/java/org/apache/flink/api/connector/source/lib/DataGeneratorSourceITCase.java#L117]
>  
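The pattern at issue can be sketched in a few lines of plain Python (names hypothetical, not Flink's actual code): if the cancellation path raises its own exception, the original failure must be attached as the cause, or it is lost.

```python
# Hypothetical sketch of a task that fails in start() and then cancels.
def run_task(preserve_cause):
    try:
        raise ValueError("failure in SourceReader.start()")  # root cause
    except Exception as root:
        cancel_error = RuntimeError("source invokable was cancelled")
        if preserve_cause:
            raise cancel_error from root  # root cause survives as __cause__
        raise cancel_error                # root cause is swallowed

try:
    run_task(preserve_cause=True)
except RuntimeError as e:
    assert isinstance(e.__cause__, ValueError)

try:
    run_task(preserve_cause=False)
except RuntimeError as e:
    assert e.__cause__ is None
```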





[jira] [Closed] (FLINK-29364) Root cause of Exceptions thrown in the SourceReader start() method gets "swallowed".

2022-12-01 Thread Lijie Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijie Wang closed FLINK-29364.
--
Resolution: Not A Problem

> Root cause of Exceptions thrown in the SourceReader start() method gets 
> "swallowed".
> 
>
> Key: FLINK-29364
> URL: https://issues.apache.org/jira/browse/FLINK-29364
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.15.3
>Reporter: Alexander Fedulov
>Priority: Major
>
> If an exception is thrown in the {_}SourceReader{_}'s _start()_ method, its 
> root cause does not get captured.
> The details are still available here: 
> [Task.java#L758|https://github.com/apache/flink/blob/bccecc23067eb7f18e20bade814be73393401be5/flink-runtime/src/main/java/org/apache/flink/runtime/taskmanager/Task.java#L758]
> But the execution falls through to 
> [Task.java#L780|https://github.com/apache/flink/blob/bccecc23067eb7f18e20bade814be73393401be5/flink-runtime/src/main/java/org/apache/flink/runtime/taskmanager/Task.java#L780],
> which cancels the source invokable without recording the actual reason,
> discarding the root cause.
>  
> How to reproduce: 
> [DataGeneratorSourceITCase.java#L117|https://github.com/afedulov/flink/blob/3df7669fcc6ba08c5147195b80cc97ac1481ec8c/flink-tests/src/test/java/org/apache/flink/api/connector/source/lib/DataGeneratorSourceITCase.java#L117]
>  





[jira] [Updated] (FLINK-29925) table ui of configure value is strange

2022-12-01 Thread jiadong.lu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jiadong.lu updated FLINK-29925:
---
Description: As shown in the figure below, when the configure value is very 
large, the ui of the table is a bit strange  !截屏2022-11-08 
15.37.04.png|width=856,height=496!  (was: As shown in the figure below, when 
the configure value is very large, the ui of the table is a bit strange  
!截屏2022-11-08 15.37.04.png!)

> table ui of configure value is strange
> --
>
> Key: FLINK-29925
> URL: https://issues.apache.org/jira/browse/FLINK-29925
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.15.3
>Reporter: jiadong.lu
>Priority: Minor
> Attachments: 截屏2022-11-08 15.37.04.png
>
>
> As shown in the figure below, when the configure value is very large, the ui 
> of the table is a bit strange  !截屏2022-11-08 
> 15.37.04.png|width=856,height=496!





[GitHub] [flink] hackeryard commented on a diff in pull request #20016: [FLINK-27857][hive] HiveSource supports filter push down for orc format

2022-12-01 Thread GitBox


hackeryard commented on code in PR #20016:
URL: https://github.com/apache/flink/pull/20016#discussion_r1036848127


##
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveTableSource.java:
##
@@ -257,6 +279,12 @@ public void applyProjection(int[][] projectedFields, 
DataType producedDataType)
 this.projectedFields = Arrays.stream(projectedFields).mapToInt(value 
-> value[0]).toArray();
 }
 
+@Override
+public Result applyFilters(List filters) {
+this.filters = new ArrayList<>(filters);
+return Result.of(new ArrayList<>(filters), new ArrayList<>(filters));

Review Comment:
   @luoyuxia What if we add a method to identify the format in HiveTableSource? 
Then, when implementing applyFilter, we could use the format info to implement 
applyFilter better.






[jira] [Commented] (FLINK-29167) Time out-of-order optimization for merging multiple data streams into one data stream

2022-12-01 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641771#comment-17641771
 ] 

Zhu Zhu commented on FLINK-29167:
-

The record order is retained only if the records come from the same source and 
are sent to the same downstream task.
Even without a {{union}}, the timestamps of records are never guaranteed to be 
sorted after a shuffle from different subtasks of a source.
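As a sketch of the difference (plain Python, not the Flink API): interleaving two time-sorted streams union-style preserves only per-stream order, while a k-way merge by timestamp restores the global order — which is essentially what a parallel sort-merge would do instead of a parallelism-1 windowAll.

```python
import heapq

# Two streams, each sorted by event time (first tuple element).
s1 = [(1, "a"), (3, "b"), (6, "c")]
s2 = [(2, "x"), (5, "y")]

# A union-like interleaving preserves per-stream order only;
# the global timestamp order is not guaranteed.
interleaved = [s1[0], s2[0], s1[1], s1[2], s2[1]]
assert [t for t, _ in interleaved] != sorted(t for t, _ in interleaved)

# A k-way merge by event time restores the global order.
merged = list(heapq.merge(s1, s2, key=lambda e: e[0]))
assert [t for t, _ in merged] == [1, 2, 3, 5, 6]
```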

>  Time out-of-order optimization for merging multiple data streams into one 
> data stream
> --
>
> Key: FLINK-29167
> URL: https://issues.apache.org/jira/browse/FLINK-29167
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.14.2
>Reporter: zhangyang
>Priority: Major
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> Problem Description: 
>      I have many scenarios that need to combine more than 2 data streams 
> (DataStreams) into one data stream. The business logic behind the stream 
> processing requires events to be in time order, so I use Flink's union 
> operator to merge the streams, but the merged data does not preserve the 
> original event-time order.
> {code:java}
> dataStream0 = dataStream0.union(dataStreamArray);  {code}
> Design suggestion: 
>     When designing the source code, the streams could be merged in the order 
> of the array in dataStreamArray instead of in a random order.
>  
> Solution suggestion: 
>    At present, I use windowAll to sort the merged data in chronological 
> order, which completes the overall scenario, but the parallelism of windowAll 
> can only be 1, which hurts the performance of the entire directed acyclic 
> graph. I haven't thought of a better remedy for the two merge-and-sort 
> scenarios, so I can only assume that the union preserves order, which would 
> save a lot of unnecessary trouble for event-time stream merging.
>  
>  





[jira] [Closed] (FLINK-29167) Time out-of-order optimization for merging multiple data streams into one data stream

2022-12-01 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-29167.
---
Resolution: Not A Problem

Closing the ticket because it does not seem to be a problem and has not been 
active for months.

>  Time out-of-order optimization for merging multiple data streams into one 
> data stream
> --
>
> Key: FLINK-29167
> URL: https://issues.apache.org/jira/browse/FLINK-29167
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.14.2
>Reporter: zhangyang
>Priority: Major
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> Problem Description: 
>      I have many scenarios that need to combine more than 2 data streams 
> (DataStreams) into one data stream. The business logic behind the stream 
> processing requires events to be in time order, so I use Flink's union 
> operator to merge the streams, but the merged data does not preserve the 
> original event-time order.
> {code:java}
> dataStream0 = dataStream0.union(dataStreamArray);  {code}
> Design suggestion: 
>     When designing the source code, the streams could be merged in the order 
> of the array in dataStreamArray instead of in a random order.
>  
> Solution suggestion: 
>    At present, I use windowAll to sort the merged data in chronological 
> order, which completes the overall scenario, but the parallelism of windowAll 
> can only be 1, which hurts the performance of the entire directed acyclic 
> graph. I haven't thought of a better remedy for the two merge-and-sort 
> scenarios, so I can only assume that the union preserves order, which would 
> save a lot of unnecessary trouble for event-time stream merging.
>  
>  





[jira] [Commented] (FLINK-30202) RateLimitedSourceReader may emit one more record than permitted

2022-12-01 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641773#comment-17641773
 ] 

Martijn Visser commented on FLINK-30202:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43636&view=logs&j=5cae8624-c7eb-5c51-92d3-4d2dacedd221&t=5acec1b4-945b-59ca-34f8-168928ce5199&l=24837

> RateLimitedSourceReader may emit one more record than permitted
> ---
>
> Key: FLINK-30202
> URL: https://issues.apache.org/jira/browse/FLINK-30202
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0
>Reporter: Matthias Pohl
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available, test-stability
>
>  [This 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43483&view=logs&j=5cae8624-c7eb-5c51-92d3-4d2dacedd221&t=5acec1b4-945b-59ca-34f8-168928ce5199&l=24747]
>  failed due to a failed assertion in 
> {{{}DataGeneratorSourceITCase.testGatedRateLimiter{}}}:
> {code:java}
> Nov 25 03:26:45 org.opentest4j.AssertionFailedError: 
> Nov 25 03:26:45 
> Nov 25 03:26:45 expected: 2
> Nov 25 03:26:45  but was: 1 {code}
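The kind of off-by-one the ticket title describes can be reproduced with a toy reader (hypothetical names, not the actual RateLimitedSourceReader code): checking the remaining permits only after emitting lets one extra record through.

```python
# Checking the permit *after* emitting lets one record slip past a
# zero-permit limiter (the off-by-one hazard).
def emit_with_check_after(records, permits):
    out = []
    for r in records:
        out.append(r)
        permits -= 1
        if permits <= 0:
            break
    return out

# Checking the permit *before* emitting respects the limit exactly.
def emit_with_check_before(records, permits):
    out = []
    for r in records:
        if permits <= 0:
            break
        out.append(r)
        permits -= 1
    return out

assert len(emit_with_check_after([1, 2, 3], 0)) == 1   # one record too many
assert len(emit_with_check_before([1, 2, 3], 0)) == 0  # limit respected
```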





[jira] [Commented] (FLINK-29461) ProcessDataStreamStreamingTests.test_process_function unstable

2022-12-01 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641775#comment-17641775
 ] 

Martijn Visser commented on FLINK-29461:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43636&view=logs&j=e92ecf6d-e207-5a42-7ff7-528ff0c5b259&t=40fc352e-9b4c-5fd8-363f-628f24b01ec2&l=27587

> ProcessDataStreamStreamingTests.test_process_function unstable
> --
>
> Key: FLINK-29461
> URL: https://issues.apache.org/jira/browse/FLINK-29461
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Huang Xingbo
>Assignee: Huang Xingbo
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 2022-09-29T02:10:45.3571648Z Sep 29 02:10:45 self = 
>  testMethod=test_process_function>
> 2022-09-29T02:10:45.3572279Z Sep 29 02:10:45 
> 2022-09-29T02:10:45.3572810Z Sep 29 02:10:45 def 
> test_process_function(self):
> 2022-09-29T02:10:45.3573495Z Sep 29 02:10:45 
> self.env.set_parallelism(1)
> 2022-09-29T02:10:45.3574148Z Sep 29 02:10:45 
> self.env.get_config().set_auto_watermark_interval(2000)
> 2022-09-29T02:10:45.3580634Z Sep 29 02:10:45 
> self.env.set_stream_time_characteristic(TimeCharacteristic.EventTime)
> 2022-09-29T02:10:45.3583194Z Sep 29 02:10:45 data_stream = 
> self.env.from_collection([(1, '1603708211000'),
> 2022-09-29T02:10:45.3584515Z Sep 29 02:10:45  
>(2, '1603708224000'),
> 2022-09-29T02:10:45.3585957Z Sep 29 02:10:45  
>(3, '1603708226000'),
> 2022-09-29T02:10:45.3587132Z Sep 29 02:10:45  
>(4, '1603708289000')],
> 2022-09-29T02:10:45.3588094Z Sep 29 02:10:45  
>   type_info=Types.ROW([Types.INT(), Types.STRING()]))
> 2022-09-29T02:10:45.3589090Z Sep 29 02:10:45 
> 2022-09-29T02:10:45.3589949Z Sep 29 02:10:45 class 
> MyProcessFunction(ProcessFunction):
> 2022-09-29T02:10:45.3590710Z Sep 29 02:10:45 
> 2022-09-29T02:10:45.3591856Z Sep 29 02:10:45 def 
> process_element(self, value, ctx):
> 2022-09-29T02:10:45.3592873Z Sep 29 02:10:45 
> current_timestamp = ctx.timestamp()
> 2022-09-29T02:10:45.3593862Z Sep 29 02:10:45 
> current_watermark = ctx.timer_service().current_watermark()
> 2022-09-29T02:10:45.3594915Z Sep 29 02:10:45 yield "current 
> timestamp: {}, current watermark: {}, current_value: {}"\
> 2022-09-29T02:10:45.3596201Z Sep 29 02:10:45 
> .format(str(current_timestamp), str(current_watermark), str(value))
> 2022-09-29T02:10:45.3597089Z Sep 29 02:10:45 
> 2022-09-29T02:10:45.3597942Z Sep 29 02:10:45 watermark_strategy = 
> WatermarkStrategy.for_monotonous_timestamps()\
> 2022-09-29T02:10:45.3599260Z Sep 29 02:10:45 
> .with_timestamp_assigner(SecondColumnTimestampAssigner())
> 2022-09-29T02:10:45.3600611Z Sep 29 02:10:45 
> data_stream.assign_timestamps_and_watermarks(watermark_strategy)\
> 2022-09-29T02:10:45.3601877Z Sep 29 02:10:45 
> .process(MyProcessFunction(), 
> output_type=Types.STRING()).add_sink(self.test_sink)
> 2022-09-29T02:10:45.3603527Z Sep 29 02:10:45 self.env.execute('test 
> process function')
> 2022-09-29T02:10:45.3604445Z Sep 29 02:10:45 results = 
> self.test_sink.get_results()
> 2022-09-29T02:10:45.3605684Z Sep 29 02:10:45 expected = ["current 
> timestamp: 1603708211000, current watermark: "
> 2022-09-29T02:10:45.3607157Z Sep 29 02:10:45 
> "-9223372036854775808, current_value: Row(f0=1, f1='1603708211000')",
> 2022-09-29T02:10:45.3608256Z Sep 29 02:10:45 "current 
> timestamp: 1603708224000, current watermark: "
> 2022-09-29T02:10:45.3609650Z Sep 29 02:10:45 
> "-9223372036854775808, current_value: Row(f0=2, f1='1603708224000')",
> 2022-09-29T02:10:45.3610854Z Sep 29 02:10:45 "current 
> timestamp: 1603708226000, current watermark: "
> 2022-09-29T02:10:45.3612279Z Sep 29 02:10:45 
> "-9223372036854775808, current_value: Row(f0=3, f1='1603708226000')",
> 2022-09-29T02:10:45.3613382Z Sep 29 02:10:45 "current 
> timestamp: 1603708289000, current watermark: "
> 2022-09-29T02:10:45.3615683Z Sep 29 02:10:45 
> "-9223372036854775808, current_value: Row(f0=4, f1='1603708289000')"]
> 2022-09-29T02:10:45.3617687Z Sep 29 02:10:45 >   
> self.assert_equals_sorted(expected, results)
> 2022-09-29T02:10:45.3618620Z Sep 29 02:10:45 
> 2022-09-29T02:10:45.3619425Z Sep 29 02:10:45 
> pyflink/da

[jira] [Commented] (FLINK-29925) table ui of configure value is strange

2022-12-01 Thread jiadong.lu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641777#comment-17641777
 ] 

jiadong.lu commented on FLINK-29925:


[~xtsong] Hello, maybe the screenshot was too big to spot the problem; I've 
fixed it. As you can see, when the `config value` is long enough, the `config 
key` cell's width becomes very small. It looks quite strange when I open the 
config page for the first time.

> table ui of configure value is strange
> --
>
> Key: FLINK-29925
> URL: https://issues.apache.org/jira/browse/FLINK-29925
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.15.3
>Reporter: jiadong.lu
>Priority: Minor
> Attachments: 截屏2022-11-08 15.37.04.png
>
>
> As shown in the figure below, when the configure value is very large, the ui 
> of the table is a bit strange  !截屏2022-11-08 
> 15.37.04.png|width=856,height=496!





[jira] [Created] (FLINK-30258) PyFlink supports closing loop back server

2022-12-01 Thread Xuannan Su (Jira)
Xuannan Su created FLINK-30258:
--

 Summary: PyFlink supports closing loop back server
 Key: FLINK-30258
 URL: https://issues.apache.org/jira/browse/FLINK-30258
 Project: Flink
  Issue Type: Improvement
  Components: API / Python
Affects Versions: 1.16.0
Reporter: Xuannan Su


Currently, a loopback server is started whenever a StreamExecutionEnvironment 
or StreamTableEnvironment is created. The loopback server can only be closed 
after the process exits. This might not be a problem for regular use, where 
only one environment object is used.

However, when running tests, such as the unit tests for PyFlink itself, more 
and more loopback servers are started as environment objects are created, 
consuming more and more resources.
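A minimal sketch of what an explicitly closable loopback server could look like (hypothetical API, not PyFlink's actual implementation): a test can release the port deterministically instead of waiting for process exit.

```python
import socket

class LoopbackServer:
    """Hypothetical sketch: a loopback server with an explicit close()."""

    def __init__(self):
        self._sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self._sock.bind(("127.0.0.1", 0))  # ephemeral loopback port
        self._sock.listen(1)
        self.port = self._sock.getsockname()[1]

    def close(self):
        # Release the socket now, instead of leaking it until process exit.
        self._sock.close()

# e.g. in a test's tearDown: create, use, and close deterministically.
srv = LoopbackServer()
assert srv.port > 0
srv.close()
```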





[jira] [Updated] (FLINK-30258) PyFlink supports closing loop back server

2022-12-01 Thread Xuannan Su (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuannan Su updated FLINK-30258:
---
Description: 
Currently, a loopback server is started whenever a StreamExecutionEnvironment 
or StreamTableEnvironment is created. The loopback server can only be closed 
after the process exits. This might not be a problem for regular use, where 
only one environment object is used.

However, when running tests, such as the unit tests for PyFlink itself, more 
and more loopback servers are started as environment objects are created, 
consuming more and more resources.

Therefore, we want to support closing the loopback server.

  was:
Currently, a loopback server will be started whenever a 
StreamExecutionEnvironment or StreamTableEnvironment is created. The loopback 
server can only be closed after the process exit. This might not be a problem 
for regular uses where only one environment object is used.

However, when running tests, such as the unit tests for PyFlink itself, as the 
environment objects are created, the process starts more and more loopback 
servers and takes more and more resources.


> PyFlink supports closing loop back server
> -
>
> Key: FLINK-30258
> URL: https://issues.apache.org/jira/browse/FLINK-30258
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Affects Versions: 1.16.0
>Reporter: Xuannan Su
>Priority: Major
>
> Currently, a loopback server is started whenever a 
> StreamExecutionEnvironment or StreamTableEnvironment is created. The loopback 
> server can only be closed after the process exits. This might not be a 
> problem for regular use, where only one environment object is used.
> However, when running tests, such as the unit tests for PyFlink itself, more 
> and more loopback servers are started as environment objects are created, 
> consuming more and more resources.
> Therefore, we want to support closing the loopback server.





[jira] [Commented] (FLINK-29925) table ui of configure value is strange

2022-12-01 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641778#comment-17641778
 ] 

Xintong Song commented on FLINK-29925:
--

Ok, I see. That's a good point. Do you want to work on fixing this? Or I can 
try to get someone with frontend expertise to look into this.

> table ui of configure value is strange
> --
>
> Key: FLINK-29925
> URL: https://issues.apache.org/jira/browse/FLINK-29925
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.15.3
>Reporter: jiadong.lu
>Priority: Minor
> Attachments: 截屏2022-11-08 15.37.04.png
>
>
> As shown in the figure below, when the configure value is very large, the ui 
> of the table is a bit strange  !截屏2022-11-08 
> 15.37.04.png|width=856,height=496!





[jira] [Updated] (FLINK-30258) PyFlink supports closing loopback server

2022-12-01 Thread Xuannan Su (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuannan Su updated FLINK-30258:
---
Summary: PyFlink supports closing loopback server  (was: PyFlink supports 
closing loop back server)

> PyFlink supports closing loopback server
> 
>
> Key: FLINK-30258
> URL: https://issues.apache.org/jira/browse/FLINK-30258
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Affects Versions: 1.16.0
>Reporter: Xuannan Su
>Priority: Major
>
> Currently, a loopback server will be started whenever a 
> StreamExecutionEnvironment or StreamTableEnvironment is created. The loopback 
> server can only be closed after the process exit. This might not be a problem 
> for regular uses where only one environment object is used.
> However, when running tests, such as the unit tests for PyFlink itself, as 
> the environment objects are created, the process starts more and more 
> loopback servers and takes more and more resources.
> Therefore, we want to support closing the loopback server.





[jira] [Commented] (FLINK-29925) table ui of configure value is strange

2022-12-01 Thread jiadong.lu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641780#comment-17641780
 ] 

jiadong.lu commented on FLINK-29925:


[~xtsong] sorry, I don't have enough frontend expertise to fix it.

> table ui of configure value is strange
> --
>
> Key: FLINK-29925
> URL: https://issues.apache.org/jira/browse/FLINK-29925
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.15.3
>Reporter: jiadong.lu
>Priority: Minor
> Attachments: 截屏2022-11-08 15.37.04.png
>
>
> As shown in the figure below, when the configure value is very large, the ui 
> of the table is a bit strange  !截屏2022-11-08 
> 15.37.04.png|width=856,height=496!





[jira] [Commented] (FLINK-27916) HybridSourceReaderTest.testReader failed with AssertionError

2022-12-01 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641785#comment-17641785
 ] 

Martijn Visser commented on FLINK-27916:


1.16: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43638&view=logs&j=fa307d6d-91b1-5ab6-d460-ef50f552b1fe&t=21eae189-b04c-5c04-662b-17dc80ffc83a&l=8564

> HybridSourceReaderTest.testReader failed with AssertionError
> 
>
> Key: FLINK-27916
> URL: https://issues.apache.org/jira/browse/FLINK-27916
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.16.0
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
> Attachments: Screen Shot 2022-07-21 at 5.51.40 PM.png
>
>
> {code:java}
> 2022-06-05T07:47:33.3332158Z Jun 05 07:47:33 [ERROR] Tests run: 3, Failures: 
> 1, Errors: 0, Skipped: 0, Time elapsed: 2.03 s <<< FAILURE! - in 
> org.apache.flink.connector.base.source.hybrid.HybridSourceReaderTest
> 2022-06-05T07:47:33.3334366Z Jun 05 07:47:33 [ERROR] 
> org.apache.flink.connector.base.source.hybrid.HybridSourceReaderTest.testReader
>   Time elapsed: 0.108 s  <<< FAILURE!
> 2022-06-05T07:47:33.3335385Z Jun 05 07:47:33 java.lang.AssertionError: 
> 2022-06-05T07:47:33.3336049Z Jun 05 07:47:33 
> 2022-06-05T07:47:33.3336682Z Jun 05 07:47:33 Expected size: 1 but was: 0 in:
> 2022-06-05T07:47:33.3337316Z Jun 05 07:47:33 []
> 2022-06-05T07:47:33.3338437Z Jun 05 07:47:33  at 
> org.apache.flink.connector.base.source.hybrid.HybridSourceReaderTest.assertAndClearSourceReaderFinishedEvent(HybridSourceReaderTest.java:199)
> 2022-06-05T07:47:33.3340082Z Jun 05 07:47:33  at 
> org.apache.flink.connector.base.source.hybrid.HybridSourceReaderTest.testReader(HybridSourceReaderTest.java:96)
> 2022-06-05T07:47:33.3341373Z Jun 05 07:47:33  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-06-05T07:47:33.3342540Z Jun 05 07:47:33  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-06-05T07:47:33.3344124Z Jun 05 07:47:33  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-06-05T07:47:33.3345283Z Jun 05 07:47:33  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> 2022-06-05T07:47:33.3346804Z Jun 05 07:47:33  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-06-05T07:47:33.3348218Z Jun 05 07:47:33  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-06-05T07:47:33.3349495Z Jun 05 07:47:33  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-06-05T07:47:33.3350779Z Jun 05 07:47:33  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-06-05T07:47:33.3351956Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-06-05T07:47:33.3357032Z Jun 05 07:47:33  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-06-05T07:47:33.3358633Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-06-05T07:47:33.3360003Z Jun 05 07:47:33  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-06-05T07:47:33.3361924Z Jun 05 07:47:33  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-06-05T07:47:33.3363427Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-06-05T07:47:33.3364793Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-06-05T07:47:33.3365619Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-06-05T07:47:33.3366254Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-06-05T07:47:33.3366939Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2022-06-05T07:47:33.3367556Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-06-05T07:47:33.3368268Z Jun 05 07:47:33  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 2022-06-05T07:47:33.3369166Z Jun 05 07:47:33  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> 2022-06-05T07:47:33.3369993Z Jun 05 07:47:33  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> 2022-06-05T07:47:33.3371021Z Jun 05 07:47:33  at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
> 2022-06-05T07:47:33.3372128Z Jun 05 07:47:33  at 

[jira] [Commented] (FLINK-24095) Elasticsearch7DynamicSinkITCase.testWritingDocuments fails due to socket timeout

2022-12-01 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641784#comment-17641784
 ] 

Martijn Visser commented on FLINK-24095:


1.16: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43638&view=logs&j=a1ac4ce4-9a4f-5fdb-3290-7e163fba19dc&t=3a8f44aa-4415-5b14-37d5-5fecc568b139&l=15772

> Elasticsearch7DynamicSinkITCase.testWritingDocuments fails due to socket 
> timeout
> 
>
> Key: FLINK-24095
> URL: https://issues.apache.org/jira/browse/FLINK-24095
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.14.0, 1.15.0, 1.16.0
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
> Fix For: 1.17.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23250&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d&l=12781
> {code}
> Aug 31 23:06:22 Caused by: java.net.SocketTimeoutException: 30,000 
> milliseconds timeout on connection http-outgoing-3 [ACTIVE]
> Aug 31 23:06:22   at 
> org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:808)
> Aug 31 23:06:22   at 
> org.elasticsearch.client.RestClient.performRequest(RestClient.java:248)
> Aug 31 23:06:22   at 
> org.elasticsearch.client.RestClient.performRequest(RestClient.java:235)
> Aug 31 23:06:22   at 
> org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1514)
> Aug 31 23:06:22   at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1499)
> Aug 31 23:06:22   at org.elasticsearch.client.RestHighLevelClient.ping(RestHighLevelClient.java:720)
> Aug 31 23:06:22   at org.apache.flink.streaming.connectors.elasticsearch7.Elasticsearch7ApiCallBridge.verifyClientConnection(Elasticsearch7ApiCallBridge.java:138)
> Aug 31 23:06:22   at org.apache.flink.streaming.connectors.elasticsearch7.Elasticsearch7ApiCallBridge.verifyClientConnection(Elasticsearch7ApiCallBridge.java:46)
> Aug 31 23:06:22   at org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase.open(ElasticsearchSinkBase.java:318)
> Aug 31 23:06:22   at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
> Aug 31 23:06:22   at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:100)
> Aug 31 23:06:22   at org.apache.flink.streaming.api.operators.StreamSink.open(StreamSink.java:46)
> Aug 31 23:06:22   at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:110)
> Aug 31 23:06:22   at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:691)
> Aug 31 23:06:22   at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
> Aug 31 23:06:22   at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:667)
> Aug 31 23:06:22   at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:639)
> Aug 31 23:06:22   at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)
> Aug 31 23:06:22   at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927)
> Aug 31 23:06:22   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766)
> Aug 31 23:06:22   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)
> Aug 31 23:06:22   at java.lang.Thread.run(Thread.java:748)
> Aug 31 23:06:22 Caused by: java.net.SocketTimeoutException: 30,000 milliseconds timeout on connection http-outgoing-3 [ACTIVE]
> Aug 31 23:06:22   at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.timeout(HttpAsyncRequestExecutor.java:387)
> Aug 31 23:06:22   at org.apache.http.impl.nio.client.InternalIODispatch.onTimeout(InternalIODispatch.java:92)
> Aug 31 23:06:22   at org.apache.http.impl.nio.client.InternalIODispatch.onTimeout(InternalIODispatch.java:39)
> Aug 31 23:06:22   at org.apache.http.impl.nio.reactor.AbstractIODispatch.timeout(AbstractIODispatch.java:175)
> Aug 31 23:06:22   at org.apache.http.impl.nio.reactor.BaseIOReactor.sessionTimedOut(BaseIOReactor.java:261)
> Aug 31 23:06:22   at org.apache.http.impl.nio.reactor.AbstractIOReactor.timeoutCheck(AbstractIOReactor.java:502)
> Aug 31 23:06:22   at org.apache.http.impl.nio.reactor.BaseIOReactor.validate(BaseIOReactor.java:211)
> Aug 31 23:06:22   at org.apache.http.impl.nio.reactor.AbstractIOReactor

[jira] [Commented] (FLINK-27917) PulsarUnorderedPartitionSplitReaderTest.consumeMessageCreatedBeforeHandleSplitsChangesAndResetToEarliestPosition failed with AssertionError

2022-12-01 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641786#comment-17641786
 ] 

Martijn Visser commented on FLINK-27917:


1.15: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43637&view=logs&j=298e20ef-7951-5965-0e79-ea664ddc435e&t=d4c90338-c843-57b0-3232-10ae74f00347&l=28411

> PulsarUnorderedPartitionSplitReaderTest.consumeMessageCreatedBeforeHandleSplitsChangesAndResetToEarliestPosition
>  failed with AssertionError
> ---
>
> Key: FLINK-27917
> URL: https://issues.apache.org/jira/browse/FLINK-27917
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Pulsar
>Affects Versions: 1.14.5, 1.15.1
>Reporter: Huang Xingbo
>Assignee: Yufan Sheng
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.7, 1.16.1, 1.15.4
>
>
> {code:java}
> 2022-06-06T06:34:46.7906026Z Jun 06 06:34:46 [ERROR] org.apache.flink.connector.pulsar.source.reader.split.PulsarUnorderedPartitionSplitReaderTest.consumeMessageCreatedBeforeHandleSplitsChangesAndResetToEarliestPosition(PulsarPartitionSplitReaderBase)[1]  Time elapsed: 9.774 s  <<< FAILURE!
> 2022-06-06T06:34:46.7919217Z Jun 06 06:34:46 java.lang.AssertionError: 
> 2022-06-06T06:34:46.7920918Z Jun 06 06:34:46 [We should fetch the expected size] 
> 2022-06-06T06:34:46.7921479Z Jun 06 06:34:46 Expected size: 20 but was: 3 in:
> 2022-06-06T06:34:46.7922019Z Jun 06 06:34:46 [PulsarMessage{id=58:0:0:0, value=ElpTDLGvKz, eventTime=0},
> 2022-06-06T06:34:46.7922757Z Jun 06 06:34:46 PulsarMessage{id=58:1:0:0, value=cDGEGcCZnP, eventTime=0},
> 2022-06-06T06:34:46.7924900Z Jun 06 06:34:46 PulsarMessage{id=58:2:0:0, value=rZmaCxrhZF, eventTime=0}]
> 2022-06-06T06:34:46.7926359Z Jun 06 06:34:46  at org.apache.flink.connector.pulsar.source.reader.split.PulsarPartitionSplitReaderTestBase.fetchedMessages(PulsarPartitionSplitReaderTestBase.java:186)
> 2022-06-06T06:34:46.7928019Z Jun 06 06:34:46  at org.apache.flink.connector.pulsar.source.reader.split.PulsarPartitionSplitReaderTestBase.fetchedMessages(PulsarPartitionSplitReaderTestBase.java:156)
> 2022-06-06T06:34:46.7930207Z Jun 06 06:34:46  at org.apache.flink.connector.pulsar.source.reader.split.PulsarPartitionSplitReaderTestBase.consumeMessageCreatedBeforeHandleSplitsChangesAndResetToEarliestPosition(PulsarPartitionSplitReaderTestBase.java:247)
> 2022-06-06T06:34:46.7931943Z Jun 06 06:34:46  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-06-06T06:34:46.7933282Z Jun 06 06:34:46  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-06-06T06:34:46.7934885Z Jun 06 06:34:46  at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-06-06T06:34:46.7936182Z Jun 06 06:34:46  at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> 2022-06-06T06:34:46.7937301Z Jun 06 06:34:46  at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
> 2022-06-06T06:34:46.7938744Z Jun 06 06:34:46  at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
> 2022-06-06T06:34:46.7939650Z Jun 06 06:34:46  at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
> 2022-06-06T06:34:46.7940516Z Jun 06 06:34:46  at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
> 2022-06-06T06:34:46.7941737Z Jun 06 06:34:46  at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
> 2022-06-06T06:34:46.7942588Z Jun 06 06:34:46  at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92)
> 2022-06-06T06:34:46.7943874Z Jun 06 06:34:46  at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
> 2022-06-06T06:34:46.7945291Z Jun 06 06:34:46  at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
> 2022-06-06T06:34:46.7946812Z Jun 06 06:34:46  at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
> 2022-06-06T06:34:46.7948852Z Jun 06 06:34:46  at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
> 2022-06-06T06:34:46.7950462Z Jun 06 06:34:46  at org.junit.jupiter.engine.execution.In

[jira] [Commented] (FLINK-28319) test_ci tests times out after/during running org.apache.flink.test.streaming.experimental

2022-12-01 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641788#comment-17641788
 ] 

Martijn Visser commented on FLINK-28319:


1.15: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43637&view=logs&j=4d4a0d10-fca2-5507-8eed-c07f0bdf4887&t=7b25afdf-cc6c-566f-5459-359dc2585798&l=21940

> test_ci tests times out after/during running 
> org.apache.flink.test.streaming.experimental
> -
>
> Key: FLINK-28319
> URL: https://issues.apache.org/jira/browse/FLINK-28319
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.15.3
>Reporter: Martijn Visser
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> Jun 30 03:16:25 [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time 
> elapsed: 4.109 s - in 
> org.apache.flink.test.streaming.experimental.CollectITCase
> ==
> === WARNING: This task took already 95% of the available time budget of 237 
> minutes ===
> ==
> ==
> The following Java processes are running (JPS)
> ==
> 932 Launcher
> 20281 Jps
> 17930 surefirebooter3147893032508885212.jar
> ==
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=37384&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba&l=6280



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28319) ResumeCheckpointManuallyITCase gets stuck

2022-12-01 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-28319:
---
Summary: ResumeCheckpointManuallyITCase gets stuck  (was: test_ci tests 
times out after/during running org.apache.flink.test.streaming.experimental)

> ResumeCheckpointManuallyITCase gets stuck
> -
>
> Key: FLINK-28319
> URL: https://issues.apache.org/jira/browse/FLINK-28319
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.15.3
>Reporter: Martijn Visser
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> Jun 30 03:16:25 [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time 
> elapsed: 4.109 s - in 
> org.apache.flink.test.streaming.experimental.CollectITCase
> ==
> === WARNING: This task took already 95% of the available time budget of 237 
> minutes ===
> ==
> ==
> The following Java processes are running (JPS)
> ==
> 932 Launcher
> 20281 Jps
> 17930 surefirebooter3147893032508885212.jar
> ==
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=37384&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba&l=6280





[GitHub] [flink-connector-elasticsearch] MartijnVisser commented on a diff in pull request #23: [FLINK-28428][connectors/elasticsearch][docs][docs-zh] Fix example and Chinese translations

2022-12-01 Thread GitBox


MartijnVisser commented on code in PR #23:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/23#discussion_r1036889587


##
docs/content/docs/connectors/datastream/elasticsearch.md:
##
@@ -226,17 +226,60 @@ To use fault tolerant Elasticsearch Sinks, checkpointing 
of the topology needs t
 
 {{< tabs "d00d1e93-4844-40d7-b0ec-9ec37e73145e" >}}
 {{< tab "Java" >}}
+
+Elasticsearch 6:
+```java
+final StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
+env.enableCheckpointing(5000); // checkpoint every 5000 msecs
+
+Elasticsearch6SinkBuilder sinkBuilder = new Elasticsearch6SinkBuilder()
+.setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
+.setHosts(new HttpHost("127.0.0.1", 9200, "http"))
+.setEmitter(
+(element, context, indexer) -> 
+indexer.add(createIndexRequest(element)));

Review Comment:
   The documentation at 
https://nightlies.apache.org/flink/flink-docs-master/api/java/org/apache/flink/connector/elasticsearch/sink/Elasticsearch6SinkBuilder.html
 is correct from what I've understood



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] MartijnVisser commented on pull request #21411: [BP-1.16][FLINK-29830][Connector/Pulsar] Create the topic with schema before consuming messages in PulsarSinkITCase. (#21252)

2022-12-01 Thread GitBox


MartijnVisser commented on PR #21411:
URL: https://github.com/apache/flink/pull/21411#issuecomment-1333487912

   Meh it's fixed





[jira] [Created] (FLINK-30259) Using flink Preconditions Util instead of uncertain Assert keyword to do checking

2022-12-01 Thread Ran Tao (Jira)
Ran Tao created FLINK-30259:
---

 Summary: Using flink Preconditions Util instead of uncertain 
Assert keyword to do checking
 Key: FLINK-30259
 URL: https://issues.apache.org/jira/browse/FLINK-30259
 Project: Flink
  Issue Type: Improvement
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
SQL / API
Affects Versions: 1.16.0
Reporter: Ran Tao


Some modules of the current Flink project use Java's 'assert' keyword for 
runtime checks. These assertions only execute when the JVM is started with the 
-enableassertions (-ea) option, which is disabled by default, so the checks may 
be silently skipped and lead to unexpected behavior. Flink already has a mature 
Preconditions utility; we can use it to replace the 'assert' keyword. It is 
cleaner and more consistent with the rest of Flink.

The following are example patches. (Using IntelliJ IDEA, we can fix them 
easily.)

RowDataPrintFunction
{code:java}
@Override
public void invoke(RowData value, Context context) {
Object data = converter.toExternal(value);
assert data != null;
writer.write(data.toString());
}
{code}

KubernetesUtils
{code:java}
public static KubernetesConfigMap checkConfigMaps(
        List<KubernetesConfigMap> configMaps, String expectedConfigMapName) {
    assert (configMaps.size() == 1);
    assert (configMaps.get(0).getName().equals(expectedConfigMapName));
    return configMaps.get(0);
}
{code}
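For illustration, a Preconditions-style rewrite of the KubernetesUtils check could look like the sketch below. The checkArgument helper is a minimal stand-in for org.apache.flink.util.Preconditions, and the String-based ConfigMap type is a simplification for this sketch; unlike 'assert', these checks fire regardless of the -ea flag.

```java
import java.util.List;

public class CheckConfigMapsDemo {

    // Minimal stand-in for Flink's org.apache.flink.util.Preconditions.checkArgument
    // (hypothetical simplification for this sketch).
    static void checkArgument(boolean condition, String message) {
        if (!condition) {
            throw new IllegalArgumentException(message);
        }
    }

    // Preconditions-style rewrite of checkConfigMaps: the checks always run,
    // independent of whether the JVM was started with -ea.
    public static String checkConfigMaps(List<String> configMaps, String expectedName) {
        checkArgument(configMaps.size() == 1, "expected exactly one ConfigMap");
        checkArgument(
                configMaps.get(0).equals(expectedName),
                "unexpected ConfigMap name: " + configMaps.get(0));
        return configMaps.get(0);
    }

    public static void main(String[] args) {
        // Happy path: exactly one ConfigMap with the expected name.
        System.out.println(checkConfigMaps(List.of("flink-cm"), "flink-cm"));

        // Unlike 'assert', the check also fails when assertions are disabled.
        try {
            checkConfigMaps(List.of("a", "b"), "a");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The misconfigured call fails with a descriptive IllegalArgumentException instead of silently returning the wrong ConfigMap.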









[GitHub] [flink-table-store] JingsongLi merged pull request #412: [hotfix] Minor code refactor: avoid casting in SparkWrite

2022-12-01 Thread GitBox


JingsongLi merged PR #412:
URL: https://github.com/apache/flink-table-store/pull/412





[jira] [Updated] (FLINK-30259) Using flink Preconditions Util instead of uncertain Assert keyword to do checking

2022-12-01 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-30259:

Component/s: Deployment / Kubernetes

> Using flink Preconditions Util instead of uncertain Assert keyword to do 
> checking
> -
>
> Key: FLINK-30259
> URL: https://issues.apache.org/jira/browse/FLINK-30259
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes, Formats (JSON, Avro, Parquet, 
> ORC, SequenceFile), Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: Ran Tao
>Priority: Major
>
> Some modules of the current Flink project use Java's 'assert' keyword for 
> runtime checks. These assertions only execute when the JVM is started with 
> the -enableassertions (-ea) option, which is disabled by default, so the 
> checks may be silently skipped and lead to unexpected behavior. Flink already 
> has a mature Preconditions utility; we can use it to replace the 'assert' 
> keyword. It is cleaner and more consistent with the rest of Flink.
> The following is an example of some patch. (by using idea, we can fix it 
> easily.)
> RowDataPrintFunction
> {code:java}
> @Override
> public void invoke(RowData value, Context context) {
> Object data = converter.toExternal(value);
> assert data != null;
> writer.write(data.toString());
> }
> {code}
> KubernetesUtils
> {code:java}
> public static KubernetesConfigMap checkConfigMaps(
> List configMaps, String 
> expectedConfigMapName) {
> assert (configMaps.size() == 1);
> assert (configMaps.get(0).getName().equals(expectedConfigMapName));
> return configMaps.get(0);
> }
> {code}





[jira] [Updated] (FLINK-30259) Using flink Preconditions Util instead of uncertain Assert keyword to do checking

2022-12-01 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-30259:

Component/s: Table SQL / Planner
 (was: Table SQL / API)

> Using flink Preconditions Util instead of uncertain Assert keyword to do 
> checking
> -
>
> Key: FLINK-30259
> URL: https://issues.apache.org/jira/browse/FLINK-30259
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Planner
>Affects Versions: 1.16.0
>Reporter: Ran Tao
>Priority: Major
>
> Some modules of the current Flink project use Java's 'assert' keyword for 
> runtime checks. These assertions only execute when the JVM is started with 
> the -enableassertions (-ea) option, which is disabled by default, so the 
> checks may be silently skipped and lead to unexpected behavior. Flink already 
> has a mature Preconditions utility; we can use it to replace the 'assert' 
> keyword. It is cleaner and more consistent with the rest of Flink.
> The following is an example of some patch. (by using idea, we can fix it 
> easily.)
> RowDataPrintFunction
> {code:java}
> @Override
> public void invoke(RowData value, Context context) {
> Object data = converter.toExternal(value);
> assert data != null;
> writer.write(data.toString());
> }
> {code}
> KubernetesUtils
> {code:java}
> public static KubernetesConfigMap checkConfigMaps(
> List configMaps, String 
> expectedConfigMapName) {
> assert (configMaps.size() == 1);
> assert (configMaps.get(0).getName().equals(expectedConfigMapName));
> return configMaps.get(0);
> }
> {code}





[jira] [Created] (FLINK-30260) FLIP-271: Autoscaling

2022-12-01 Thread Maximilian Michels (Jira)
Maximilian Michels created FLINK-30260:
--

 Summary: FLIP-271: Autoscaling
 Key: FLINK-30260
 URL: https://issues.apache.org/jira/browse/FLINK-30260
 Project: Flink
  Issue Type: New Feature
  Components: Kubernetes Operator
Reporter: Maximilian Michels
Assignee: Maximilian Michels


https://cwiki.apache.org/confluence/display/FLINK/FLIP-271%3A+Autoscaling





[jira] [Updated] (FLINK-30259) Use flink Preconditions Util instead of uncertain Assert keyword to do checking

2022-12-01 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-30259:

Summary: Use flink Preconditions Util instead of uncertain Assert keyword 
to do checking  (was: Using flink Preconditions Util instead of uncertain 
Assert keyword to do checking)

> Use flink Preconditions Util instead of uncertain Assert keyword to do 
> checking
> ---
>
> Key: FLINK-30259
> URL: https://issues.apache.org/jira/browse/FLINK-30259
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes, Formats (JSON, Avro, Parquet, 
> ORC, SequenceFile), Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: Ran Tao
>Priority: Major
>
> Some modules of the current Flink project use Java's 'assert' keyword for 
> runtime checks. These assertions only execute when the JVM is started with 
> the -enableassertions (-ea) option, which is disabled by default, so the 
> checks may be silently skipped and lead to unexpected behavior. Flink already 
> has a mature Preconditions utility; we can use it to replace the 'assert' 
> keyword. It is cleaner and more consistent with the rest of Flink.
> The following is an example of some patch. (by using idea, we can fix it 
> easily.)
> RowDataPrintFunction
> {code:java}
> @Override
> public void invoke(RowData value, Context context) {
> Object data = converter.toExternal(value);
> assert data != null;
> writer.write(data.toString());
> }
> {code}
> KubernetesUtils
> {code:java}
> public static KubernetesConfigMap checkConfigMaps(
> List configMaps, String 
> expectedConfigMapName) {
> assert (configMaps.size() == 1);
> assert (configMaps.get(0).getName().equals(expectedConfigMapName));
> return configMaps.get(0);
> }
> {code}





[jira] [Updated] (FLINK-30260) FLIP-271: Add Autoscaling implementation

2022-12-01 Thread Maximilian Michels (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maximilian Michels updated FLINK-30260:
---
Summary: FLIP-271: Add Autoscaling implementation  (was: FLIP-271: 
Autoscaling)

> FLIP-271: Add Autoscaling implementation
> 
>
> Key: FLINK-30260
> URL: https://issues.apache.org/jira/browse/FLINK-30260
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>Priority: Major
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-271%3A+Autoscaling





[jira] [Commented] (FLINK-30259) Use flink Preconditions Util instead of uncertain Assert keyword to do checking

2022-12-01 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641802#comment-17641802
 ] 

Ran Tao commented on FLINK-30259:
-

Hi [~martijnvisser], can you take a look at this? I'd be glad to improve it.

> Use flink Preconditions Util instead of uncertain Assert keyword to do 
> checking
> ---
>
> Key: FLINK-30259
> URL: https://issues.apache.org/jira/browse/FLINK-30259
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes, Formats (JSON, Avro, Parquet, 
> ORC, SequenceFile), Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: Ran Tao
>Priority: Major
>
> Some modules of the current Flink project use Java's 'assert' keyword for 
> runtime checks. These assertions only execute when the JVM is started with 
> the -enableassertions (-ea) option, which is disabled by default, so the 
> checks may be silently skipped and lead to unexpected behavior. Flink already 
> has a mature Preconditions utility; we can use it to replace the 'assert' 
> keyword. It is cleaner and more consistent with the rest of Flink.
> The following is an example of some patch. (by using idea, we can fix it 
> easily.)
> RowDataPrintFunction
> {code:java}
> @Override
> public void invoke(RowData value, Context context) {
> Object data = converter.toExternal(value);
> assert data != null;
> writer.write(data.toString());
> }
> {code}
> KubernetesUtils
> {code:java}
> public static KubernetesConfigMap checkConfigMaps(
> List configMaps, String 
> expectedConfigMapName) {
> assert (configMaps.size() == 1);
> assert (configMaps.get(0).getName().equals(expectedConfigMapName));
> return configMaps.get(0);
> }
> {code}





[jira] [Updated] (FLINK-30260) FLIP-271: Add initial Autoscaling implementation

2022-12-01 Thread Maximilian Michels (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maximilian Michels updated FLINK-30260:
---
Summary: FLIP-271: Add initial Autoscaling implementation   (was: FLIP-271: 
Add Autoscaling implementation)

> FLIP-271: Add initial Autoscaling implementation 
> -
>
> Key: FLINK-30260
> URL: https://issues.apache.org/jira/browse/FLINK-30260
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>Priority: Major
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-271%3A+Autoscaling





[jira] [Updated] (FLINK-30259) Use flink Preconditions Util instead of uncertain Assert keyword to do checking

2022-12-01 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-30259:

Description: 
Some modules of the current Flink project use Java's 'assert' keyword for 
runtime checks. These assertions only execute when the JVM is started with the 
-enableassertions (-ea) option, which is disabled by default (meaning such 
assert code does not run at all), and may therefore lead to unexpected 
behavior. Flink already has a mature Preconditions utility; we can use it to 
replace the 'assert' keyword. It is cleaner and more consistent with the rest 
of Flink.

The following are example patches. (Using IntelliJ IDEA, we can fix them 
easily.)

RowDataPrintFunction
{code:java}
@Override
public void invoke(RowData value, Context context) {
Object data = converter.toExternal(value);
assert data != null;
writer.write(data.toString());
}
{code}

KubernetesUtils
{code:java}
public static KubernetesConfigMap checkConfigMaps(
        List<KubernetesConfigMap> configMaps, String expectedConfigMapName) {
    assert (configMaps.size() == 1);
    assert (configMaps.get(0).getName().equals(expectedConfigMapName));
    return configMaps.get(0);
}
{code}





  was:
The code of some modules of the current Flink project uses the 'assert' keyword 
of java to do checking, which actually depends on the enablement of the 
-enableassertions (-ea) option (default is false), otherwise it may lead to 
unexpected behavior. In fact, flink already has a mature Preconditions tool, we 
can use it to replace 'assert' keyword. it is more clean and consistent with 
flink.

The following is an example of some patch. (by using idea, we can fix it 
easily.)

RowDataPrintFunction
{code:java}
@Override
public void invoke(RowData value, Context context) {
Object data = converter.toExternal(value);
assert data != null;
writer.write(data.toString());
}
{code}

KubernetesUtils
{code:java}
public static KubernetesConfigMap checkConfigMaps(
List configMaps, String expectedConfigMapName) 
{
assert (configMaps.size() == 1);
assert (configMaps.get(0).getName().equals(expectedConfigMapName));
return configMaps.get(0);
}
{code}






> Use flink Preconditions Util instead of uncertain Assert keyword to do 
> checking
> ---
>
> Key: FLINK-30259
> URL: https://issues.apache.org/jira/browse/FLINK-30259
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes, Formats (JSON, Avro, Parquet, 
> ORC, SequenceFile), Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: Ran Tao
>Priority: Major
>
> Some modules of the current Flink project use Java's 'assert' keyword for 
> runtime checks. These assertions only execute when the JVM is started with 
> the -enableassertions (-ea) option, which is disabled by default (meaning 
> such assert code does not run at all), and may therefore lead to unexpected 
> behavior. Flink already has a mature Preconditions utility; we can use it to 
> replace the 'assert' keyword. It is cleaner and more consistent with the 
> rest of Flink.
> The following is an example of some patch. (by using idea, we can fix it 
> easily.)
> RowDataPrintFunction
> {code:java}
> @Override
> public void invoke(RowData value, Context context) {
> Object data = converter.toExternal(value);
> assert data != null;
> writer.write(data.toString());
> }
> {code}
> KubernetesUtils
> {code:java}
> public static KubernetesConfigMap checkConfigMaps(
> List configMaps, String 
> expectedConfigMapName) {
> assert (configMaps.size() == 1);
> assert (configMaps.get(0).getName().equals(expectedConfigMapName));
> return configMaps.get(0);
> }
> {code}





[jira] [Commented] (FLINK-30259) Use flink Preconditions Util instead of uncertain Assert keyword to do checking

2022-12-01 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641803#comment-17641803
 ] 

Martijn Visser commented on FLINK-30259:


[~chesnay] You're better suited to answer this 

> Use flink Preconditions Util instead of uncertain Assert keyword to do 
> checking
> ---
>
> Key: FLINK-30259
> URL: https://issues.apache.org/jira/browse/FLINK-30259
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes, Formats (JSON, Avro, Parquet, 
> ORC, SequenceFile), Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: Ran Tao
>Priority: Major
>
> Some modules of the current Flink project use Java's 'assert' keyword for 
> runtime checks. These assertions only execute when the JVM is started with 
> the -enableassertions (-ea) option, which is disabled by default (meaning 
> such assert code does not run at all), and may therefore lead to unexpected 
> behavior. Flink already has a mature Preconditions utility; we can use it to 
> replace the 'assert' keyword. It is cleaner and more consistent with the 
> rest of Flink.
> The following is an example of some patch. (by using idea, we can fix it 
> easily.)
> RowDataPrintFunction
> {code:java}
> @Override
> public void invoke(RowData value, Context context) {
> Object data = converter.toExternal(value);
> assert data != null;
> writer.write(data.toString());
> }
> {code}
> KubernetesUtils
> {code:java}
> public static KubernetesConfigMap checkConfigMaps(
> List configMaps, String 
> expectedConfigMapName) {
> assert (configMaps.size() == 1);
> assert (configMaps.get(0).getName().equals(expectedConfigMapName));
> return configMaps.get(0);
> }
> {code}





[jira] [Updated] (FLINK-30259) Use flink Preconditions Util instead of uncertain Assert keyword to do checking

2022-12-01 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-30259:

Description: 
Some modules of the current Flink project use Java's 'assert' keyword for 
runtime checks. These assertions only execute when the JVM is started with the 
-enableassertions (-ea) option, which is disabled by default (meaning such 
assert code does not run at all), and may therefore lead to unexpected 
behavior. Flink already has a mature Preconditions utility; we can use it to 
replace the 'assert' keyword. It is cleaner and more consistent with the rest 
of Flink.

The following are example patches. (Using IntelliJ IDEA, we can fix them 
easily.)

RowDataPrintFunction
{code:java}
@Override
public void invoke(RowData value, Context context) {
Object data = converter.toExternal(value);
assert data != null;
writer.write(data.toString());
}
{code}

KubernetesUtils
{code:java}
public static KubernetesConfigMap checkConfigMaps(
        List<KubernetesConfigMap> configMaps, String expectedConfigMapName) {
    assert (configMaps.size() == 1);
    assert (configMaps.get(0).getName().equals(expectedConfigMapName));
    return configMaps.get(0);
}
{code}

RocksDBOperationUtils
{code:java}
if (memoryConfig.isUsingFixedMemoryPerSlot()) {
    assert memoryConfig.getFixedMemoryPerSlot() != null;

    logger.info("Getting fixed-size shared cache for RocksDB.");
    return memoryManager.getExternalSharedMemoryResource(
            FIXED_SLOT_MEMORY_RESOURCE_ID,
            allocator,
            // if assertions are not enabled, this call throws an NPE
            memoryConfig.getFixedMemoryPerSlot().getBytes());
} else {
    logger.info("Getting managed memory shared cache for RocksDB.");
    return memoryManager.getSharedMemoryResourceForManagedMemory(
            MANAGED_MEMORY_RESOURCE_ID, allocator, memoryFraction);
}
{code}

For example, if assertions are not enabled, 
memoryConfig.getFixedMemoryPerSlot().getBytes() in RocksDBOperationUtils will 
throw a NullPointerException.
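A fail-fast alternative can be sketched with the JDK's Objects.requireNonNull standing in for Flink's Preconditions.checkNotNull. The class and method names below are hypothetical simplifications of the RocksDBOperationUtils case:

```java
import java.util.Objects;

public class FixedMemoryCheckDemo {

    // Preconditions-style null check: fails immediately with a clear message,
    // regardless of the -ea JVM flag, instead of a bare NPE later in getBytes().
    static long fixedMemoryBytes(Long fixedMemoryPerSlot) {
        return Objects.requireNonNull(
                fixedMemoryPerSlot, "fixed memory per slot must be configured");
    }

    public static void main(String[] args) {
        System.out.println(fixedMemoryBytes(64L));

        try {
            // With plain 'assert' (and assertions disabled, the JVM default),
            // this misconfiguration would only surface as an unexplained NPE.
            fixedMemoryBytes(null);
        } catch (NullPointerException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The null case fails at the check itself with an explanatory message rather than deep inside the memory-resource setup.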







  was:
The code of some modules of the current Flink project uses the 'assert' keyword 
of java to do checking, which actually depends on the enablement of the 
-enableassertions (-ea) option (default is false, which means some assert code 
can not work), otherwise it may lead to unexpected behavior. In fact, flink 
already has a mature Preconditions tool, we can use it to replace 'assert' 
keyword. it is more clean and consistent with flink.

The following is an example of some patch. (by using idea, we can fix it 
easily.)

RowDataPrintFunction
{code:java}
@Override
public void invoke(RowData value, Context context) {
Object data = converter.toExternal(value);
assert data != null;
writer.write(data.toString());
}
{code}

KubernetesUtils
{code:java}
public static KubernetesConfigMap checkConfigMaps(
List configMaps, String expectedConfigMapName) 
{
assert (configMaps.size() == 1);
assert (configMaps.get(0).getName().equals(expectedConfigMapName));
return configMaps.get(0);
}
{code}






> Use flink Preconditions Util instead of uncertain Assert keyword to do 
> checking
> ---
>
> Key: FLINK-30259
> URL: https://issues.apache.org/jira/browse/FLINK-30259
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes, Formats (JSON, Avro, Parquet, 
> ORC, SequenceFile), Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: Ran Tao
>Priority: Major
>
> The code of some modules of the current Flink project uses the 'assert' 
> keyword of java to do checking, which actually depends on the enablement of 
> the -enableassertions (-ea) option (default is false, which means some assert 
> code can not work), otherwise it may lead to unexpected behavior. In fact, 
> flink already has a mature Preconditions tool, we can use it to replace 
> 'assert' keyword. it is more clean and consistent with flink.
> The following is an example of some patch. (by using idea, we can fix it 
> easily.)
> RowDataPrintFunction
> {code:java}
> @Override
> public void invoke(RowData value, Context context) {
> Object data = converter.toExternal(value);
> assert data != null;
> writer.write(data.toString());
> }
> {code}
> KubernetesUtils
> {code:java}
> public static KubernetesConfigMap checkConfigMaps(
> List configMaps, String 
> expectedConfigMapName) {
> assert (configMaps.size() == 1);
> assert (configMaps.get(0).getName().equals(expectedConfigMapName));
> return configMap

[jira] [Updated] (FLINK-30259) Use flink Preconditions Util instead of uncertain Assert keyword to do checking

2022-12-01 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-30259:

Description: 
Some modules of the current Flink project use Java's 'assert' keyword for 
checking. These checks only run when the -enableassertions (-ea) JVM option is 
set (it is off by default, so the assert statements may never execute), which 
can lead to unexpected behavior. Flink already has a mature Preconditions 
utility; we can use it to replace the 'assert' keyword, which is cleaner and 
more consistent with the rest of Flink.

The following are some example patches (using IDEA, we can fix them easily).

RowDataPrintFunction
{code:java}
@Override
public void invoke(RowData value, Context context) {
Object data = converter.toExternal(value);
assert data != null;
writer.write(data.toString());
}
{code}
e.g. if assertions are not enabled, data.toString() will cause an NPE.

KubernetesUtils
{code:java}
public static KubernetesConfigMap checkConfigMaps(
List<KubernetesConfigMap> configMaps, String expectedConfigMapName) {
assert (configMaps.size() == 1);
assert (configMaps.get(0).getName().equals(expectedConfigMapName));
return configMaps.get(0);
}
{code}
e.g. if assertions are not enabled, configMaps.get(0) can fail at runtime (an 
IndexOutOfBoundsException on an empty list) or silently return the wrong 
ConfigMap.

RocksDBOperationUtils
{code:java}
if (memoryConfig.isUsingFixedMemoryPerSlot()) {
assert memoryConfig.getFixedMemoryPerSlot() != null;

logger.info("Getting fixed-size shared cache for RocksDB.");
return memoryManager.getExternalSharedMemoryResource(
FIXED_SLOT_MEMORY_RESOURCE_ID,
allocator,
// if assertions are not enabled, this will cause an NPE.
memoryConfig.getFixedMemoryPerSlot().getBytes());
} else {
logger.info("Getting managed memory shared cache for RocksDB.");
return memoryManager.getSharedMemoryResourceForManagedMemory(
MANAGED_MEMORY_RESOURCE_ID, allocator, memoryFraction);
}
{code}

e.g. if assertions are not enabled, 
memoryConfig.getFixedMemoryPerSlot().getBytes() in RocksDBOperationUtils will 
cause an NPE.







  was:
The code of some modules of the current Flink project uses the 'assert' keyword 
of java to do checking, which actually depends on the enablement of the 
-enableassertions (-ea) option (default is false, which means some assert code 
can not work), otherwise it may lead to unexpected behavior. In fact, flink 
already has a mature Preconditions tool, we can use it to replace 'assert' 
keyword. it is more clean and consistent with flink.

The following is an example of some patch. (by using idea, we can fix it 
easily.)

RowDataPrintFunction
{code:java}
@Override
public void invoke(RowData value, Context context) {
Object data = converter.toExternal(value);
assert data != null;
writer.write(data.toString());
}
{code}

KubernetesUtils
{code:java}
public static KubernetesConfigMap checkConfigMaps(
List configMaps, String expectedConfigMapName) 
{
assert (configMaps.size() == 1);
assert (configMaps.get(0).getName().equals(expectedConfigMapName));
return configMaps.get(0);
}
{code}

RocksDBOperationUtils
{code:java}
if (memoryConfig.isUsingFixedMemoryPerSlot()) {
assert memoryConfig.getFixedMemoryPerSlot() != null;

logger.info("Getting fixed-size shared cache for RocksDB.");
return memoryManager.getExternalSharedMemoryResource(
FIXED_SLOT_MEMORY_RESOURCE_ID,
allocator,
// if assert not enable,  here will cause NPE.
memoryConfig.getFixedMemoryPerSlot().getBytes());
} else {
logger.info("Getting managed memory shared cache for RocksDB.");
return memoryManager.getSharedMemoryResourceForManagedMemory(
MANAGED_MEMORY_RESOURCE_ID, allocator, memoryFraction);
}
{code}

e.g. if assert not enable, 
RocksDBOperationUtils#memoryConfig.getFixedMemoryPerSlot().getBytes())  will 
cause NPE.








> Use flink Preconditions Util instead of uncertain Assert keyword to do 
> checking
> ---
>
> Key: FLINK-30259
> URL: https://issues.apache.org/jira/browse/FLINK-30259
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes, Formats (JSON, Avro, Parquet, 
> ORC, SequenceFile), Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: Ran Tao
>Priority: Major
>
> The code of some modules of the current Flink project us

[jira] [Updated] (FLINK-30259) Use flink Preconditions Util instead of uncertain Assert keyword to do checking

2022-12-01 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-30259:

Description: 
Some modules of the current Flink project use Java's 'assert' keyword for 
checking. These checks only run when the -enableassertions (-ea) JVM option is 
set (it is off by default, so the assert statements may never execute), which 
can lead to unexpected behavior. Flink already has a mature Preconditions 
utility; we can use it to replace the 'assert' keyword, which is cleaner and 
more consistent with the rest of Flink.

The following are some example snippets (using IDEA, we can find the other 
places).

RowDataPrintFunction
{code:java}
@Override
public void invoke(RowData value, Context context) {
Object data = converter.toExternal(value);
assert data != null;
writer.write(data.toString());
}
{code}
e.g. if assertions are not enabled, data.toString() will cause an NPE.

KubernetesUtils
{code:java}
public static KubernetesConfigMap checkConfigMaps(
List<KubernetesConfigMap> configMaps, String expectedConfigMapName) {
assert (configMaps.size() == 1);
assert (configMaps.get(0).getName().equals(expectedConfigMapName));
return configMaps.get(0);
}
{code}
e.g. if assertions are not enabled, configMaps.get(0) can fail at runtime (an 
IndexOutOfBoundsException on an empty list) or silently return the wrong 
ConfigMap.
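For the boolean-style asserts above, checkArgument is the natural replacement. 
A minimal self-contained sketch, where the checkArgument helper mirrors 
org.apache.flink.util.Preconditions.checkArgument and ConfigMap is a 
simplified hypothetical stand-in for the Kubernetes client class (this is an 
illustration, not the actual KubernetesUtils patch):

```java
import java.util.List;

public class CheckConfigMapsSketch {

    // Mirrors org.apache.flink.util.Preconditions.checkArgument(boolean, Object).
    static void checkArgument(boolean condition, Object errorMessage) {
        if (!condition) {
            throw new IllegalArgumentException(String.valueOf(errorMessage));
        }
    }

    // Hypothetical stand-in for the Kubernetes ConfigMap type.
    static class ConfigMap {
        private final String name;

        ConfigMap(String name) {
            this.name = name;
        }

        String getName() {
            return name;
        }
    }

    static ConfigMap checkConfigMaps(List<ConfigMap> configMaps, String expectedName) {
        // Before: assert (configMaps.size() == 1);
        // -- a no-op without -ea, so configMaps.get(0) could throw on an
        // empty list, or the wrong ConfigMap could be returned silently.
        checkArgument(configMaps.size() == 1, "Expected exactly one ConfigMap");
        checkArgument(
                configMaps.get(0).getName().equals(expectedName),
                "Unexpected ConfigMap name");
        return configMaps.get(0);
    }

    public static void main(String[] args) {
        ConfigMap cm =
                checkConfigMaps(List.of(new ConfigMap("ha-configmap")), "ha-configmap");
        System.out.println(cm.getName());
    }
}
```

With assertions disabled, the original code would silently accept a list of 
the wrong size; the explicit check turns that into an immediate 
IllegalArgumentException with a clear message.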

RocksDBOperationUtils
{code:java}
if (memoryConfig.isUsingFixedMemoryPerSlot()) {
assert memoryConfig.getFixedMemoryPerSlot() != null;

logger.info("Getting fixed-size shared cache for RocksDB.");
return memoryManager.getExternalSharedMemoryResource(
FIXED_SLOT_MEMORY_RESOURCE_ID,
allocator,
// if assertions are not enabled, this will cause an NPE.
memoryConfig.getFixedMemoryPerSlot().getBytes());
} else {
logger.info("Getting managed memory shared cache for RocksDB.");
return memoryManager.getSharedMemoryResourceForManagedMemory(
MANAGED_MEMORY_RESOURCE_ID, allocator, memoryFraction);
}
{code}

e.g. if assertions are not enabled, 
memoryConfig.getFixedMemoryPerSlot().getBytes() in RocksDBOperationUtils will 
cause an NPE.







  was:
The code of some modules of the current Flink project uses the 'assert' keyword 
of java to do checking, which actually depends on the enablement of the 
-enableassertions (-ea) option (default is false, which means some assert code 
can not work), otherwise it may lead to unexpected behavior. In fact, flink 
already has a mature Preconditions tool, we can use it to replace 'assert' 
keyword. it is more clean and consistent with flink.

The following is an example of some patch. (by using idea, we can fix it 
easily.)

RowDataPrintFunction
{code:java}
@Override
public void invoke(RowData value, Context context) {
Object data = converter.toExternal(value);
assert data != null;
writer.write(data.toString());
}
{code}
e.g. if assert not enable,data.toString() will cause NPE.

KubernetesUtils
{code:java}
public static KubernetesConfigMap checkConfigMaps(
List configMaps, String expectedConfigMapName) 
{
assert (configMaps.size() == 1);
assert (configMaps.get(0).getName().equals(expectedConfigMapName));
return configMaps.get(0);
}
{code}
e.g. if assert not enable,configMaps.get(0)will cause NPE.

RocksDBOperationUtils
{code:java}
if (memoryConfig.isUsingFixedMemoryPerSlot()) {
assert memoryConfig.getFixedMemoryPerSlot() != null;

logger.info("Getting fixed-size shared cache for RocksDB.");
return memoryManager.getExternalSharedMemoryResource(
FIXED_SLOT_MEMORY_RESOURCE_ID,
allocator,
// if assert not enable,  here will cause NPE.
memoryConfig.getFixedMemoryPerSlot().getBytes());
} else {
logger.info("Getting managed memory shared cache for RocksDB.");
return memoryManager.getSharedMemoryResourceForManagedMemory(
MANAGED_MEMORY_RESOURCE_ID, allocator, memoryFraction);
}
{code}

e.g. if assert not enable, 
RocksDBOperationUtils#memoryConfig.getFixedMemoryPerSlot().getBytes())  will 
cause NPE.








> Use flink Preconditions Util instead of uncertain Assert keyword to do 
> checking
> ---
>
> Key: FLINK-30259
> URL: https://issues.apache.org/jira/browse/FLINK-30259
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes, Formats (JSON, Avro, Parquet, 
> ORC, SequenceFile), Table SQL / Planner
>Affects Versions: 1.

[GitHub] [flink] Zakelly commented on a diff in pull request #21362: [FLINK-29430] Add sanity check when setCurrentKeyGroupIndex

2022-12-01 Thread GitBox


Zakelly commented on code in PR #21362:
URL: https://github.com/apache/flink/pull/21362#discussion_r1036946214


##
flink-runtime/src/main/java/org/apache/flink/runtime/state/heap/InternalKeyContextImpl.java:
##
@@ -72,6 +73,10 @@ public void setCurrentKey(@Nonnull K currentKey) {
 
 @Override
 public void setCurrentKeyGroupIndex(int currentKeyGroupIndex) {
+if (!keyGroupRange.contains(currentKeyGroupIndex)) {
+throw KeyGroupRangeOffsets.newIllegalKeyGroupException(
+currentKeyGroupIndex, keyGroupRange);
+}

Review Comment:
   Since we agree to keep this change as it is, would you please approve and 
merge this PR? Or do you have any other comments? Thanks @rkhachatryan @Myasuka 
. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-hbase] ferenc-csaky commented on a diff in pull request #2: [FLINK-30062][Connectors/HBase] Adapt connector code to external repo

2022-12-01 Thread GitBox


ferenc-csaky commented on code in PR #2:
URL: 
https://github.com/apache/flink-connector-hbase/pull/2#discussion_r1036960125


##
pom.xml:
##
@@ -0,0 +1,540 @@
+
+
+http://maven.apache.org/POM/4.0.0";
+xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
+xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/maven-v4_0_0.xsd";>
+
+   4.0.0
+
+   
+   io.github.zentol.flink
+   flink-connector-parent
+   1.0
+   
+
+   org.apache.flink
+   flink-connector-hbase-parent
+   1.0-SNAPSHOT
+
+   Flink : Connectors : HBase Parent
+   pom
+   https://flink.apache.org
+   2022
+
+   
+   
+   The Apache Software License, Version 2.0
+   
https://www.apache.org/licenses/LICENSE-2.0.txt
+   repo
+   
+   
+
+   
+   https://github.com/apache/flink-connector-hbase
+   
g...@github.com:apache/flink-connector-hbase.git
+   
scm:git:https://gitbox.apache.org/repos/asf/flink-connector-hbase.git
+   
+
+   
+   1.16.0
+   15.0
+
+   2.12
+   2.12.7
+
+   3.21.0
+   2.8.5
+   1.4.3
+   2.2.3
+   4.5.13
+   4.4.14
+   2.13.4.20221013
+   1.3.9
+   4.13.2
+   5.8.1
+   2.24.0
+   4.1.70.Final
+   3.4.14
+
+   1.7.36
+   2.17.2
+
+   
+   
flink-connector-hbase-parent
+   
+
+   
+   flink-connector-hbase-base
+   flink-connector-hbase-1.4
+   flink-connector-hbase-2.2
+   flink-sql-connector-hbase-1.4
+   flink-sql-connector-hbase-2.2
+   
+
+   
+   
+   org.apache.flink
+   flink-shaded-force-shading
+   

Review Comment:
   I did this based on the root POM of the `flink-connector-elasticsearch` 
repo. Now that I have checked it thoroughly, I think it is only needed there 
because of some specific Elasticsearch-related error, according to the shade 
plugin config in `flink-connector-parent`. I removed it from here, since the 
artifacts have the same content regardless of its presence.






[jira] [Created] (FLINK-30261) PartiallyFinishedSourcesITCase.test timed out while waiting for tasks to finish

2022-12-01 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-30261:
-

 Summary: PartiallyFinishedSourcesITCase.test timed out while 
waiting for tasks to finish
 Key: FLINK-30261
 URL: https://issues.apache.org/jira/browse/FLINK-30261
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.15.3
Reporter: Matthias Pohl


{{PartiallyFinishedSourcesITCase.test}} timed out while waiting for tasks to 
finish:
{code}
"main" #1 prio=5 os_prio=0 tid=0x7fe78800b800 nid=0x3a11f waiting on 
condition [0x7fe78e6fb000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at 
org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
at 
org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:138)
at 
org.apache.flink.runtime.testutils.CommonTestUtils.waitForSubtasksToFinish(CommonTestUtils.java:291)
at 
org.apache.flink.runtime.operators.lifecycle.TestJobExecutor.waitForSubtasksToFinish(TestJobExecutor.java:226)
at 
org.apache.flink.runtime.operators.lifecycle.PartiallyFinishedSourcesITCase.test(PartiallyFinishedSourcesITCase.java:138)
[...]
{code}
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43637&view=logs&j=4d4a0d10-fca2-5507-8eed-c07f0bdf4887&t=7b25afdf-cc6c-566f-5459-359dc2585798&l=39302



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-28319) ResumeCheckpointManuallyITCase gets stuck

2022-12-01 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641832#comment-17641832
 ] 

Matthias Pohl commented on FLINK-28319:
---

> 1.15: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43637&view=logs&j=4d4a0d10-fca2-5507-8eed-c07f0bdf4887&t=7b25afdf-cc6c-566f-5459-359dc2585798&l=21940

The aforementioned build failure is actually not related to FLINK-28319. I 
created FLINK-30261 to cover the test failure.

> ResumeCheckpointManuallyITCase gets stuck
> -
>
> Key: FLINK-28319
> URL: https://issues.apache.org/jira/browse/FLINK-28319
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.15.3
>Reporter: Martijn Visser
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> Jun 30 03:16:25 [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time 
> elapsed: 4.109 s - in 
> org.apache.flink.test.streaming.experimental.CollectITCase
> ==
> === WARNING: This task took already 95% of the available time budget of 237 
> minutes ===
> ==
> ==
> The following Java processes are running (JPS)
> ==
> 932 Launcher
> 20281 Jps
> 17930 surefirebooter3147893032508885212.jar
> ==
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=37384&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba&l=6280





[jira] [Comment Edited] (FLINK-28319) ResumeCheckpointManuallyITCase gets stuck

2022-12-01 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641832#comment-17641832
 ] 

Matthias Pohl edited comment on FLINK-28319 at 12/1/22 11:04 AM:
-

{quote}
1.15: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43637&view=logs&j=4d4a0d10-fca2-5507-8eed-c07f0bdf4887&t=7b25afdf-cc6c-566f-5459-359dc2585798&l=21940
{quote}
The aforementioned build failure is actually not related to FLINK-28319. I 
created FLINK-30261 to cover the test failure.


was (Author: mapohl):
> 1.15: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43637&view=logs&j=4d4a0d10-fca2-5507-8eed-c07f0bdf4887&t=7b25afdf-cc6c-566f-5459-359dc2585798&l=21940

The aforementioned build failure is actually not related to FLINK-28319. I 
created FLINK-30261 to cover the test failure.

> ResumeCheckpointManuallyITCase gets stuck
> -
>
> Key: FLINK-28319
> URL: https://issues.apache.org/jira/browse/FLINK-28319
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.15.3
>Reporter: Martijn Visser
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> Jun 30 03:16:25 [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time 
> elapsed: 4.109 s - in 
> org.apache.flink.test.streaming.experimental.CollectITCase
> ==
> === WARNING: This task took already 95% of the available time budget of 237 
> minutes ===
> ==
> ==
> The following Java processes are running (JPS)
> ==
> 932 Launcher
> 20281 Jps
> 17930 surefirebooter3147893032508885212.jar
> ==
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=37384&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba&l=6280





[jira] [Commented] (FLINK-29634) Support periodic checkpoint triggering

2022-12-01 Thread Piotr Nowojski (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641837#comment-17641837
 ] 

Piotr Nowojski commented on FLINK-29634:


Oh, thanks for correcting me [~thw]. Yes, I think this makes sense to me in 
that case (y)

> Support periodic checkpoint triggering
> --
>
> Key: FLINK-29634
> URL: https://issues.apache.org/jira/browse/FLINK-29634
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Thomas Weise
>Assignee: Jiale Tan
>Priority: Major
>
> Similar to the support for periodic savepoints, the operator should support 
> triggering periodic checkpoints to break the incremental checkpoint chain.
> Support for external triggering will come with 1.17: 
> https://issues.apache.org/jira/browse/FLINK-27101 





[GitHub] [flink] rmetzger commented on pull request #21436: [FLINK-30093] [formats] Protobuf Timestamp Compile Error

2022-12-01 Thread GitBox


rmetzger commented on PR #21436:
URL: https://github.com/apache/flink/pull/21436#issuecomment-1333609703

   Thx for the PR. Can you also update the commit message, for example to the 
PR title?





[GitHub] [flink] rmetzger commented on pull request #21436: [FLINK-30093] [formats] Protobuf Timestamp Compile Error

2022-12-01 Thread GitBox


rmetzger commented on PR #21436:
URL: https://github.com/apache/flink/pull/21436#issuecomment-1333611549

   The CI failure seems to be caused by a known instability: 
https://issues.apache.org/jira/browse/FLINK-30257





[jira] [Commented] (FLINK-30257) SqlClientITCase#testMatchRecognize failed

2022-12-01 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641844#comment-17641844
 ] 

Robert Metzger commented on FLINK-30257:


Another case: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43633&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=160c9ae5-96fd-516e-1c91-deb81f59292a

> SqlClientITCase#testMatchRecognize failed
> -
>
> Key: FLINK-30257
> URL: https://issues.apache.org/jira/browse/FLINK-30257
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.17.0
>Reporter: Martijn Visser
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> Nov 30 21:54:41 [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 224.683 s <<< FAILURE! - in SqlClientITCase
> Nov 30 21:54:41 [ERROR] SqlClientITCase.testMatchRecognize  Time elapsed: 
> 50.164 s  <<< FAILURE!
> Nov 30 21:54:41 org.opentest4j.AssertionFailedError: 
> Nov 30 21:54:41 
> Nov 30 21:54:41 expected: 1
> Nov 30 21:54:41  but was: 0
> Nov 30 21:54:41   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Nov 30 21:54:41   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Nov 30 21:54:41   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Nov 30 21:54:41   at 
> SqlClientITCase.verifyNumberOfResultRecords(SqlClientITCase.java:297)
> Nov 30 21:54:41   at 
> SqlClientITCase.testMatchRecognize(SqlClientITCase.java:255)
> Nov 30 21:54:41   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Nov 30 21:54:41   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Nov 30 21:54:41   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Nov 30 21:54:41   at java.lang.reflect.Method.invoke(Method.java:498)
> Nov 30 21:54:41   at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMetho
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43635&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=160c9ae5-96fd-516e-1c91-deb81f59292a&l=14817





[jira] [Commented] (FLINK-29427) LookupJoinITCase failed with classloader problem

2022-12-01 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641846#comment-17641846
 ] 

Matthias Pohl commented on FLINK-29427:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43642&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=21331

> LookupJoinITCase failed with classloader problem
> 
>
> Key: FLINK-29427
> URL: https://issues.apache.org/jira/browse/FLINK-29427
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Huang Xingbo
>Assignee: Alexander Smirnov
>Priority: Blocker
>  Labels: pull-request-available, test-stability
>
> {code:java}
> 2022-09-27T02:49:20.9501313Z Sep 27 02:49:20 Caused by: 
> org.codehaus.janino.InternalCompilerException: Compiling 
> "KeyProjection$108341": Trying to access closed classloader. Please check if 
> you store classloaders directly or indirectly in static fields. If the 
> stacktrace suggests that the leak occurs in a third party library and cannot 
> be fixed immediately, you can disable this check with the configuration 
> 'classloader.check-leaked-classloader'.
> 2022-09-27T02:49:20.9502654Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.compileUnit(UnitCompiler.java:382)
> 2022-09-27T02:49:20.9503366Z Sep 27 02:49:20  at 
> org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:237)
> 2022-09-27T02:49:20.9504044Z Sep 27 02:49:20  at 
> org.codehaus.janino.SimpleCompiler.compileToClassLoader(SimpleCompiler.java:465)
> 2022-09-27T02:49:20.9504704Z Sep 27 02:49:20  at 
> org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:216)
> 2022-09-27T02:49:20.9505341Z Sep 27 02:49:20  at 
> org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:207)
> 2022-09-27T02:49:20.9505965Z Sep 27 02:49:20  at 
> org.codehaus.commons.compiler.Cookable.cook(Cookable.java:80)
> 2022-09-27T02:49:20.9506584Z Sep 27 02:49:20  at 
> org.codehaus.commons.compiler.Cookable.cook(Cookable.java:75)
> 2022-09-27T02:49:20.9507261Z Sep 27 02:49:20  at 
> org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:104)
> 2022-09-27T02:49:20.9507883Z Sep 27 02:49:20  ... 30 more
> 2022-09-27T02:49:20.9509266Z Sep 27 02:49:20 Caused by: 
> java.lang.IllegalStateException: Trying to access closed classloader. Please 
> check if you store classloaders directly or indirectly in static fields. If 
> the stacktrace suggests that the leak occurs in a third party library and 
> cannot be fixed immediately, you can disable this check with the 
> configuration 'classloader.check-leaked-classloader'.
> 2022-09-27T02:49:20.9510835Z Sep 27 02:49:20  at 
> org.apache.flink.util.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.ensureInner(FlinkUserCodeClassLoaders.java:184)
> 2022-09-27T02:49:20.9511760Z Sep 27 02:49:20  at 
> org.apache.flink.util.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.loadClass(FlinkUserCodeClassLoaders.java:192)
> 2022-09-27T02:49:20.9512456Z Sep 27 02:49:20  at 
> java.lang.Class.forName0(Native Method)
> 2022-09-27T02:49:20.9513014Z Sep 27 02:49:20  at 
> java.lang.Class.forName(Class.java:348)
> 2022-09-27T02:49:20.9513649Z Sep 27 02:49:20  at 
> org.codehaus.janino.ClassLoaderIClassLoader.findIClass(ClassLoaderIClassLoader.java:89)
> 2022-09-27T02:49:20.9514339Z Sep 27 02:49:20  at 
> org.codehaus.janino.IClassLoader.loadIClass(IClassLoader.java:312)
> 2022-09-27T02:49:20.9514990Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.findTypeByName(UnitCompiler.java:8556)
> 2022-09-27T02:49:20.9515659Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6749)
> 2022-09-27T02:49:20.9516337Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6594)
> 2022-09-27T02:49:20.9516989Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:6573)
> 2022-09-27T02:49:20.9517632Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.access$13900(UnitCompiler.java:215)
> 2022-09-27T02:49:20.9518319Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler$22$1.visitReferenceType(UnitCompiler.java:6481)
> 2022-09-27T02:49:20.9519018Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler$22$1.visitReferenceType(UnitCompiler.java:6476)
> 2022-09-27T02:49:20.9519680Z Sep 27 02:49:20  at 
> org.codehaus.janino.Java$ReferenceType.accept(Java.java:3928)
> 2022-09-27T02:49:20.9520386Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler$22.visitType(UnitCompiler.java:6476)
> 2022-09-27T02:49:20.9521042Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler$22.visitType(UnitCompiler.java:6469)
> 2022-09-27T02:49:20.9521677Z S

[jira] [Created] (FLINK-30262) UpsertKafkaTableITCase failed when starting the container because waiting for a port timed out

2022-12-01 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-30262:
-

 Summary: UpsertKafkaTableITCase failed when starting the container 
because waiting for a port timed out
 Key: FLINK-30262
 URL: https://issues.apache.org/jira/browse/FLINK-30262
 Project: Flink
  Issue Type: Bug
  Components: Build System, Connectors / Kafka, Test Infrastructure
Affects Versions: 1.16.0
Reporter: Matthias Pohl


{code:java}
Dec 01 08:35:00 Caused by: 
org.testcontainers.containers.ContainerLaunchException: Timed out waiting for 
container port to open (172.17.0.1 ports: [60109, 60110] should be listening)
Dec 01 08:35:00 at 
org.testcontainers.containers.wait.strategy.HostPortWaitStrategy.waitUntilReady(HostPortWaitStrategy.java:90)
Dec 01 08:35:00 at 
org.testcontainers.containers.wait.strategy.AbstractWaitStrategy.waitUntilReady(AbstractWaitStrategy.java:51)
Dec 01 08:35:00 at 
org.testcontainers.containers.GenericContainer.waitUntilContainerStarted(GenericContainer.java:926)
Dec 01 08:35:00 at 
org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:480)
Dec 01 08:35:00 ... 33 more
 {code}
 

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43643&view=logs&j=aa18c3f6-13b8-5f58-86bb-c1cffb239496&t=502fb6c0-30a2-5e49-c5c2-a00fa3acb203&l=37366





[GitHub] [flink] luoyuxia commented on pull request #21290: [FLINK-29980] Handle partition keys directly in hive bulk format

2022-12-01 Thread GitBox


luoyuxia commented on PR #21290:
URL: https://github.com/apache/flink/pull/21290#issuecomment-1333634978

   Thanks for the contribution, I'll have a look when I'm free.





[GitHub] [flink-connector-pulsar] boring-cyborg[bot] commented on pull request #3: [FLINK-29830][Connector/Pulsar] Create the topic with schema before consuming messages in PulsarSinkITCase.

2022-12-01 Thread GitBox


boring-cyborg[bot] commented on PR #3:
URL: 
https://github.com/apache/flink-connector-pulsar/pull/3#issuecomment-1333639936

   Thanks for opening this pull request! Please check out our contributing 
guidelines. (https://flink.apache.org/contributing/how-to-contribute.html)
   





[GitHub] [flink] dannycranmer commented on pull request #21407: [FLINK-30224][Connectors/Kinesis] An IT test for slow FlinKinesisConsumer's run() which caused an NPE in close

2022-12-01 Thread GitBox


dannycranmer commented on PR #21407:
URL: https://github.com/apache/flink/pull/21407#issuecomment-1333640951

   @flinkbot run azure





[jira] [Commented] (FLINK-30116) Don't Show Env Vars in Web UI

2022-12-01 Thread ConradJam (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641857#comment-17641857
 ] 

ConradJam commented on FLINK-30116:
---

I got sidetracked by something else; it will be done this week [~chesnay] 

> Don't Show Env Vars in Web UI
> -
>
> Key: FLINK-30116
> URL: https://issues.apache.org/jira/browse/FLINK-30116
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.16.0
>Reporter: Konstantin Knauf
>Assignee: ConradJam
>Priority: Blocker
> Fix For: 1.17.0, 1.16.1
>
>
> As discussed and agreed upon in [1], we'd like to revert [2] and not show any 
> environment variables in the Web UI for security reasons. 
> [1] https://lists.apache.org/thread/rjgk15bqttvblp60zry4n5pw4xjw7q9k 
> [2] https://issues.apache.org/jira/browse/FLINK-28311



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] reswqa commented on pull request #21437: [FLINK-27944] Input channel metric will no longer be registered multiple times

2022-12-01 Thread GitBox


reswqa commented on PR #21437:
URL: https://github.com/apache/flink/pull/21437#issuecomment-1333642182

   @pnowojski Thank you very much for reminding me. 
   I'm sorry for forgetting this. I still keep two commits, but added Xiaogang as 
a co-author on the second one.





[GitHub] [flink] XComp commented on pull request #21411: [BP-1.16][FLINK-29830][Connector/Pulsar] Create the topic with schema before consuming messages in PulsarSinkITCase. (#21252)

2022-12-01 Thread GitBox


XComp commented on PR #21411:
URL: https://github.com/apache/flink/pull/21411#issuecomment-1333645254

   But here's what's puzzling me now: shouldn't that change already be in the 
external repository? Or didn't you base the external repository on 
`apache/flink:master`? :thinking: 
   
   The original change made it to `apache/flink:master` with PR #21252 20 days 
ago.





[GitHub] [flink-kubernetes-operator] gaborgsomogyi opened a new pull request, #459: [FLINK-29974] Allow session job cancel to be called for each job state

2022-12-01 Thread GitBox


gaborgsomogyi opened a new pull request, #459:
URL: https://github.com/apache/flink-kubernetes-operator/pull/459

   ## What is the purpose of the change
   
   When session jobs are in specific states and cancel is called, they throw 
exceptions. In this PR I've added a test for each state and changed the code to 
handle the states correctly.
   
   ## Brief change log
   
   * Allow session job cancel to be called for each job state
   * Minor refactor in the app cancel area
   
   ## Verifying this change
   Existing + additional unit tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changes to the `CustomResourceDescriptors`: 
no
 - Core observer or reconciler logic that is regularly executed: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   





[GitHub] [flink-kubernetes-operator] gaborgsomogyi commented on pull request #459: [FLINK-29974] Allow session job cancel to be called for each job state

2022-12-01 Thread GitBox


gaborgsomogyi commented on PR #459:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/459#issuecomment-1333650021

   cc @gyfora 





[jira] [Created] (FLINK-30263) Introduce schemas table to table store

2022-12-01 Thread Shammon (Jira)
Shammon created FLINK-30263:
---

 Summary: Introduce schemas table to table store
 Key: FLINK-30263
 URL: https://issues.apache.org/jira/browse/FLINK-30263
 Project: Flink
  Issue Type: New Feature
  Components: Table Store
Affects Versions: table-store-0.3.0
Reporter: Shammon


You can query the historical schemas of the table through SQL, for example, 
query the historical schemas of table "T" through the following SQL:

SELECT * FROM T$schemas;





[GitHub] [flink] MartijnVisser commented on pull request #21411: [BP-1.16][FLINK-29830][Connector/Pulsar] Create the topic with schema before consuming messages in PulsarSinkITCase. (#21252)

2022-12-01 Thread GitBox


MartijnVisser commented on PR #21411:
URL: https://github.com/apache/flink/pull/21411#issuecomment-1333684866

   > Or didn't you base the external repository on `apache/flink:master`? 🤔
   
   Correct, because `master` has a breaking change in 1.17 (see 
https://issues.apache.org/jira/browse/FLINK-28853). So the sync was based on 
`release-1.16`, that's now being released as 3.0. When that release is 
completed, we'll merge https://github.com/apache/flink-connector-pulsar/pull/2 
which contains the remaining changes between `release-1.16` and `master`. Those 
changes can only be released once Flink 1.17 is out, and will be released as 4.0. 
   
   This is the only way we can get the Pulsar connector code out of 
`apache/flink:master` during the 1.17 release cycle. 





[jira] [Updated] (FLINK-30263) Introduce schemas table to table store

2022-12-01 Thread Shammon (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shammon updated FLINK-30263:

Parent: FLINK-29735
Issue Type: Sub-task  (was: New Feature)

> Introduce schemas table to table store
> --
>
> Key: FLINK-30263
> URL: https://issues.apache.org/jira/browse/FLINK-30263
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: Shammon
>Priority: Major
>
> You can query the historical schemas of the table through SQL, for example, 
> query the historical schemas of table "T" through the following SQL:
> SELECT * FROM T$schemas;





[jira] [Updated] (FLINK-30263) Introduce schemas meta table

2022-12-01 Thread Shammon (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shammon updated FLINK-30263:

Summary: Introduce schemas meta table  (was: Introduce schemas table to 
table store)

> Introduce schemas meta table
> 
>
> Key: FLINK-30263
> URL: https://issues.apache.org/jira/browse/FLINK-30263
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.3.0
>Reporter: Shammon
>Priority: Major
>
> You can query the historical schemas of the table through SQL, for example, 
> query the historical schemas of table "T" through the following SQL:
> SELECT * FROM T$schemas;





[jira] [Commented] (FLINK-30158) [Flink SQL][Protobuf] NullPointerException when querying Kafka topic using repeated or map attributes

2022-12-01 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641895#comment-17641895
 ] 

Benchao Li commented on FLINK-30158:


bq. Even if I commented out the second column in the table schema, I still got 
the same NullPointerException

I'm still confused about current situation. If you only use
{code:sql}
CREATE TABLE TestMessages (
  `first` ARRAY<BIGINT>
) ...
{code}
I believe this is a very common case and [tested 
well|https://github.com/apache/flink/blob/c65591d4109f39dfa6a5b5f945c46f97dc5d967c/flink-formats/flink-protobuf/src/test/proto/test_repeated.proto#L26].

Could you give us a minimal reproducible test, including the following things:
1. the full DDL and query
2. the jar which includes your compiled protobuf classes
3. the version of protoc you are using
4. the protobuf schema
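
A minimal, purely hypothetical repro of the kind requested might look like the 
following (the connector, topic, and server address are placeholders; only the 
message class `com.example.message.Test` comes from the schema in the report):

```sql
-- Hypothetical repro sketch; all connector options below are placeholders.
CREATE TABLE TestMessages (
  `first` ARRAY<BIGINT>
) WITH (
  'connector' = 'kafka',
  'topic' = 'test-topic',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'protobuf',
  'protobuf.message-class-name' = 'com.example.message.Test'
);

SELECT * FROM TestMessages;
```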

> [Flink SQL][Protobuf] NullPointerException when querying Kafka topic using 
> repeated or map attributes
> -
>
> Key: FLINK-30158
> URL: https://issues.apache.org/jira/browse/FLINK-30158
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Affects Versions: 1.16.0
>Reporter: James Mcguire
>Priority: Major
>
> I am encountering a {{java.lang.NullPointerException}} exception when trying 
> to use Flink SQL to query a kafka topic that uses either {{repeated}} and/or 
> {{map}} attributes.
>  
> *Replication steps*
>  # Use a protobuf definition that either uses repeated and/or map.  This 
> protobuf schema should cover a few of the problematic scenarios I ran into:
>  
> {code:java}
> syntax = "proto3";
> package example.message;
> option java_package = "com.example.message";
> option java_multiple_files = true;
> message NestedType {
>   int64 nested_first = 1;
>   oneof nested_second {
> int64 one_of_first = 2;
> string one_of_second = 3;
>   }
> }
> message Test {
>   repeated int64 first = 1;
>   map second = 2;
> } {code}
> 2. Attempt query on topic, even excluding problematic columns:
>  
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.formats.protobuf.PbCodegenException: 
> java.lang.NullPointerException{code}
>  
>  
> log file:
>  
> {code:java}
> 2022-11-22 15:33:59,510 WARN  org.apache.flink.table.client.cli.CliClient 
>  [] - Could not execute SQL 
> statement.org.apache.flink.table.client.gateway.SqlExecutionException: Error 
> while retrieving result.at 
> org.apache.flink.table.client.gateway.local.result.CollectResultBase$ResultRetrievalThread.run(CollectResultBase.java:79)
>  ~[flink-sql-client-1.16.0.jar:1.16.0]Caused by: java.lang.RuntimeException: 
> Failed to fetch next resultat 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
>  ~[flink-dist-1.16.0.jar:1.16.0]at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
>  ~[flink-dist-1.16.0.jar:1.16.0]at 
> org.apache.flink.table.planner.connectors.CollectDynamicSink$CloseableRowIteratorWrapper.hasNext(CollectDynamicSink.java:222)
>  ~[?:?]at 
> org.apache.flink.table.client.gateway.local.result.CollectResultBase$ResultRetrievalThread.run(CollectResultBase.java:75)
>  ~[flink-sql-client-1.16.0.jar:1.16.0]Caused by: java.io.IOException: Failed 
> to fetch job execution resultat 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:184)
>  ~[flink-dist-1.16.0.jar:1.16.0]at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:121)
>  ~[flink-dist-1.16.0.jar:1.16.0]at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
>  ~[flink-dist-1.16.0.jar:1.16.0]at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
>  ~[flink-dist-1.16.0.jar:1.16.0]at 
> org.apache.flink.table.planner.connectors.CollectDynamicSink$CloseableRowIteratorWrapper.hasNext(CollectDynamicSink.java:222)
>  ~[?:?]at 
> org.apache.flink.table.client.gateway.local.result.CollectResultBase$ResultRetrievalThread.run(CollectResultBase.java:75)
>  ~[flink-sql-client-1.16.0.jar:1.16.0]Caused by: 
> java.util.concurrent.ExecutionException: 
> org.apache.flink.client.program.ProgramInvocationException: Job failed 
> (JobID: bc869097009a92d0601add881a6b920c)at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395) 
> ~[?:?]at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:202

[GitHub] [flink-connector-hbase] ferenc-csaky commented on a diff in pull request #2: [FLINK-30062][Connectors/HBase] Adapt connector code to external repo

2022-12-01 Thread GitBox


ferenc-csaky commented on code in PR #2:
URL: 
https://github.com/apache/flink-connector-hbase/pull/2#discussion_r1037073397


##
pom.xml:
##
@@ -0,0 +1,540 @@
+
+
+http://maven.apache.org/POM/4.0.0";
+xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
+xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/maven-v4_0_0.xsd";>
+
+   4.0.0
+
+   
+   io.github.zentol.flink
+   flink-connector-parent
+   1.0
+   
+
+   org.apache.flink
+   flink-connector-hbase-parent
+   1.0-SNAPSHOT
+
+   Flink : Connectors : HBase Parent
+   pom
+   https://flink.apache.org
+   2022
+
+   
+   
+   The Apache Software License, Version 2.0
+   
https://www.apache.org/licenses/LICENSE-2.0.txt
+   repo
+   
+   
+
+   
+   https://github.com/apache/flink-connector-hbase
+   
g...@github.com:apache/flink-connector-hbase.git
+   
scm:git:https://gitbox.apache.org/repos/asf/flink-connector-hbase.git
+   
+
+   
+   1.16.0
+   15.0
+
+   2.12
+   2.12.7
+
+   3.21.0
+   2.8.5
+   1.4.3
+   2.2.3
+   4.5.13
+   4.4.14
+   2.13.4.20221013
+   1.3.9
+   4.13.2
+   5.8.1
+   2.24.0
+   4.1.70.Final
+   3.4.14
+
+   1.7.36
+   2.17.2
+
+   
+   
flink-connector-hbase-parent
+   
+
+   
+   flink-connector-hbase-base
+   flink-connector-hbase-1.4
+   flink-connector-hbase-2.2
+   flink-sql-connector-hbase-1.4
+   flink-sql-connector-hbase-2.2
+   
+
+   
+   
+   org.apache.flink
+   flink-shaded-force-shading
+   
+
+   
+   
+   org.junit.jupiter
+   junit-jupiter
+   test
+   
+
+   
+   org.assertj
+   assertj-core
+   test
+   
+
+   
+   
+   org.apache.logging.log4j
+   log4j-slf4j-impl
+   test
+   
+
+   
+   org.apache.logging.log4j
+   log4j-api
+   test
+   
+
+   
+   org.apache.logging.log4j
+   log4j-core
+   test
+   
+
+   
+   
+   org.apache.logging.log4j
+   log4j-1.2-api
+   test
+   
+
+   
+   org.apache.flink
+   flink-test-utils
+   test
+   
+   
+
+   
+   
+   
+   org.apache.flink
+   flink-core
+   ${flink.version}
+   
+
+   
+   org.apache.flink
+   flink-streaming-java
+   ${flink.version}
+   
+
+   
+   org.apache.flink
+   flink-table-common
+   ${flink.version}
+   test-jar
+   
+
+   
+   org.apache.flink
+   
flink-table-api-java-bridge
+   ${flink.version}
+   
+
+   
+   org.apache.flink
+   
flink-table-planner_${scala.binary.version}
+   ${flink.version}
+   
+
+   
+   org.apache.flink
+   
flink-table-planner_${scala.binary.version}
+   ${flink.version}
+   test-jar
+   
+
+   
+   org.apache.flink
+   
flink-hadoop-compatibility_${scala.binary.version}
+   ${flink.version}
+   
+
+   
+   org.apache.flink
+   
flink-shaded-force-shading
+   ${flink.shaded.version}
+

[GitHub] [flink] zentol opened a new pull request, #21438: [FLINK-29420][build] Upgrade Zookeeper to 3.7.1

2022-12-01 Thread GitBox


zentol opened a new pull request, #21438:
URL: https://github.com/apache/flink/pull/21438

   Upgrade to the latest stable Zookeeper release. This client is compatible 
with 3.5 onwards.





[jira] [Updated] (FLINK-29420) Upgrade Zookeeper to 3.7

2022-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-29420:
---
Labels: pull-request-available  (was: )

> Upgrade Zookeeper to 3.7
> 
>
> Key: FLINK-29420
> URL: https://issues.apache.org/jira/browse/FLINK-29420
> Project: Flink
>  Issue Type: Improvement
>  Components: BuildSystem / Shaded, Runtime / Coordination
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>






[jira] [Created] (FLINK-30264) Set session job status to FAILED if already have retried max attempts

2022-12-01 Thread Xin Hao (Jira)
Xin Hao created FLINK-30264:
---

 Summary: Set session job status to FAILED if already have retried 
max attempts
 Key: FLINK-30264
 URL: https://issues.apache.org/jira/browse/FLINK-30264
 Project: Flink
  Issue Type: Improvement
  Components: Kubernetes Operator
Reporter: Xin Hao


Sometimes, the session job deployment fails because of the user code.

There is nothing the Flink operator can do to fix the failure.

 

So can we add a new reconciliation state *FAILED* and set the status to this if 
the failure still exists after we have retried the maximum number of attempts?

Currently, the reconciliation status remains *UPGRADING* indefinitely.
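
A minimal sketch of the proposed behavior, assuming a simple retry counter (all 
names here are hypothetical and not taken from the operator codebase):

```java
// Hypothetical sketch: flip the reconciliation state to FAILED once the
// maximum number of retry attempts is exhausted.
enum ReconciliationState { DEPLOYED, UPGRADING, ROLLING_BACK, FAILED }

class RetryTracker {
    private final int maxAttempts;
    private int attempts;
    private ReconciliationState state = ReconciliationState.UPGRADING;

    RetryTracker(int maxAttempts) {
        this.maxAttempts = maxAttempts;
    }

    /** Record a failed reconcile attempt; transition to FAILED when exhausted. */
    ReconciliationState recordFailure() {
        attempts++;
        if (attempts >= maxAttempts) {
            state = ReconciliationState.FAILED;
        }
        return state;
    }
}

public class RetryTrackerDemo {
    public static void main(String[] args) {
        RetryTracker tracker = new RetryTracker(3);
        System.out.println(tracker.recordFailure()); // UPGRADING
        System.out.println(tracker.recordFailure()); // UPGRADING
        System.out.println(tracker.recordFailure()); // FAILED
    }
}
```

With such a terminal state in place, the operator could stop retrying and 
surface the failure instead of looping in *UPGRADING*.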





[GitHub] [flink] flinkbot commented on pull request #21438: [FLINK-29420][build] Upgrade Zookeeper to 3.7.1

2022-12-01 Thread GitBox


flinkbot commented on PR #21438:
URL: https://github.com/apache/flink/pull/21438#issuecomment-1333737839

   
   ## CI report:
   
   * fe9e8639ac54e1ca4e21fdfec43ef590fa58bd0a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] (FLINK-28319) ResumeCheckpointManuallyITCase gets stuck

2022-12-01 Thread Matthias Pohl (Jira)


[ https://issues.apache.org/jira/browse/FLINK-28319 ]


Matthias Pohl deleted comment on FLINK-28319:
---

was (Author: mapohl):
{quote}
1.15: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43637&view=logs&j=4d4a0d10-fca2-5507-8eed-c07f0bdf4887&t=7b25afdf-cc6c-566f-5459-359dc2585798&l=21940
{quote}
The aforementioned build failure is actually not related to FLINK-28319. I 
created FLINK-30261 to cover the test failure.

> ResumeCheckpointManuallyITCase gets stuck
> -
>
> Key: FLINK-28319
> URL: https://issues.apache.org/jira/browse/FLINK-28319
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.15.3
>Reporter: Martijn Visser
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> Jun 30 03:16:25 [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time 
> elapsed: 4.109 s - in 
> org.apache.flink.test.streaming.experimental.CollectITCase
> ==
> === WARNING: This task took already 95% of the available time budget of 237 
> minutes ===
> ==
> ==
> The following Java processes are running (JPS)
> ==
> 932 Launcher
> 20281 Jps
> 17930 surefirebooter3147893032508885212.jar
> ==
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=37384&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba&l=6280





[jira] [Closed] (FLINK-30261) PartiallyFinishedSourcesITCase.test timed out while waiting for tasks to finish

2022-12-01 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl closed FLINK-30261.
-
Resolution: Not A Problem

Closing this issue. I interpreted the thread dump the wrong way; FLINK-28319 
is still the proper issue for that build failure. Based on the thread dump, 
{{PartiallyFinishedSourcesITCase}} only waits for a limited amount of time, 
whereas for {{ResumeCheckpointManuallyITCase}} the threads are in the 
{{WAITING}} state.
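
The bounded wait behaves roughly like the following polling loop, a simplified 
sketch in the spirit of {{CommonTestUtils.waitUntilCondition}} (not the actual 
Flink implementation): a thread sleeping in such a loop shows up as 
TIMED_WAITING, which is why the thread dump distinguishes the two cases.

```java
import java.util.function.Supplier;

// Simplified sketch of a bounded condition wait; not the actual
// CommonTestUtils.waitUntilCondition implementation from Flink.
public class WaitUtil {
    public static void waitUntilCondition(
            Supplier<Boolean> condition, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.get()) {
            if (System.currentTimeMillis() > deadline) {
                // Bounded: give up instead of hanging forever.
                throw new IllegalStateException("Condition was not met in time");
            }
            Thread.sleep(pollMs); // thread shows up as TIMED_WAITING here
        }
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        waitUntilCondition(() -> System.currentTimeMillis() - start > 50, 1000, 10);
        System.out.println("condition met");
    }
}
```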

> PartiallyFinishedSourcesITCase.test timed out while waiting for tasks to 
> finish
> ---
>
> Key: FLINK-30261
> URL: https://issues.apache.org/jira/browse/FLINK-30261
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.3
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
>
> {{PartiallyFinishedSourcesITCase.test}} timed out while waiting for tasks to 
> finish:
> {code}
> "main" #1 prio=5 os_prio=0 tid=0x7fe78800b800 nid=0x3a11f waiting on 
> condition [0x7fe78e6fb000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:145)
>   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:138)
>   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitForSubtasksToFinish(CommonTestUtils.java:291)
>   at 
> org.apache.flink.runtime.operators.lifecycle.TestJobExecutor.waitForSubtasksToFinish(TestJobExecutor.java:226)
>   at 
> org.apache.flink.runtime.operators.lifecycle.PartiallyFinishedSourcesITCase.test(PartiallyFinishedSourcesITCase.java:138)
> [...]
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43637&view=logs&j=4d4a0d10-fca2-5507-8eed-c07f0bdf4887&t=7b25afdf-cc6c-566f-5459-359dc2585798&l=39302





[GitHub] [flink] pnowojski merged pull request #21437: [FLINK-27944] Input channel metric will no longer be registered multiple times

2022-12-01 Thread GitBox


pnowojski merged PR #21437:
URL: https://github.com/apache/flink/pull/21437





[jira] [Commented] (FLINK-27944) IO metrics collision happens if a task has union inputs

2022-12-01 Thread Piotr Nowojski (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641908#comment-17641908
 ] 

Piotr Nowojski commented on FLINK-27944:


merged commit 20808fd into apache:master 

> IO metrics collision happens if a task has union inputs
> ---
>
> Key: FLINK-27944
> URL: https://issues.apache.org/jira/browse/FLINK-27944
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Affects Versions: 1.15.0
>Reporter: Zhu Zhu
>Assignee: Weijie Guo
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>
> When a task has union inputs, some IO metrics(numBytesIn* and numBuffersIn*) 
> of the different inputs may collide and failed to be registered.
>  
> The problem can be reproduced with a simple job like:
> {code:java}
> DataStream<String> source1 = env.fromElements("abc");
> DataStream<String> source2 = env.fromElements("123");
> source1.union(source2).print();{code}
>  
> Logs of collisions:
> {code:java}
> 2022-06-08 00:59:01,629 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBytesInLocal'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0, 
> Shuffle, Netty, Input]
> 2022-06-08 00:59:01,629 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBytesInLocalPerSecond'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0, 
> Shuffle, Netty, Input]
> 2022-06-08 00:59:01,629 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBytesInLocal'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0]
> 2022-06-08 00:59:01,629 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBytesInLocalPerSecond'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0]
> 2022-06-08 00:59:01,630 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBytesInRemote'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0, 
> Shuffle, Netty, Input]
> 2022-06-08 00:59:01,630 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBytesInRemotePerSecond'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0, 
> Shuffle, Netty, Input]
> 2022-06-08 00:59:01,630 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBytesInRemote'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0]
> 2022-06-08 00:59:01,630 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBytesInRemotePerSecond'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0]
> 2022-06-08 00:59:01,630 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBuffersInLocal'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0, 
> Shuffle, Netty, Input]
> 2022-06-08 00:59:01,630 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBuffersInLocalPerSecond'. Metric will not be reported.[, 
> taskmanager, fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to 
> Std. Out, 0, Shuffle, Netty, Input]
> 2022-06-08 00:59:01,630 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBuffersInLocal'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0]
> 2022-06-08 00:59:01,630 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBuffersI

[jira] (FLINK-27944) IO metrics collision happens if a task has union inputs

2022-12-01 Thread Piotr Nowojski (Jira)


[ https://issues.apache.org/jira/browse/FLINK-27944 ]


Piotr Nowojski deleted comment on FLINK-27944:


was (Author: pnowojski):
merged commit 20808fd into apache:master 

> IO metrics collision happens if a task has union inputs
> ---
>
> Key: FLINK-27944
> URL: https://issues.apache.org/jira/browse/FLINK-27944
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Affects Versions: 1.15.0
>Reporter: Zhu Zhu
>Assignee: Weijie Guo
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>
> When a task has union inputs, some IO metrics(numBytesIn* and numBuffersIn*) 
> of the different inputs may collide and failed to be registered.
>  
> The problem can be reproduced with a simple job like:
> {code:java}
> DataStream<String> source1 = env.fromElements("abc");
> DataStream<String> source2 = env.fromElements("123");
> source1.union(source2).print();{code}
>  
> Logs of collisions:
> {code:java}
> 2022-06-08 00:59:01,629 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBytesInLocal'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0, 
> Shuffle, Netty, Input]
> 2022-06-08 00:59:01,629 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBytesInLocalPerSecond'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0, 
> Shuffle, Netty, Input]
> 2022-06-08 00:59:01,629 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBytesInLocal'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0]
> 2022-06-08 00:59:01,629 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBytesInLocalPerSecond'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0]
> 2022-06-08 00:59:01,630 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBytesInRemote'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0, 
> Shuffle, Netty, Input]

[GitHub] [flink] XComp commented on pull request #21411: [BP-1.16][FLINK-29830][Connector/Pulsar] Create the topic with schema before consuming messages in PulsarSinkITCase. (#21252)

2022-12-01 Thread GitBox


XComp commented on PR #21411:
URL: https://github.com/apache/flink/pull/21411#issuecomment-1333752687

   thanks for clarification :+1: 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-27944) IO metrics collision happens if a task has union inputs

2022-12-01 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-27944:
---
Fix Version/s: 1.17.0
   1.16.1
   1.15.4

> IO metrics collision happens if a task has union inputs
> ---
>
> Key: FLINK-27944
> URL: https://issues.apache.org/jira/browse/FLINK-27944
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Affects Versions: 1.15.0
>Reporter: Zhu Zhu
>Assignee: Weijie Guo
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.17.0, 1.16.1, 1.15.4
>
>
> When a task has union inputs, some IO metrics (numBytesIn* and numBuffersIn*) 
> of the different inputs may collide and fail to be registered.
>  
> The problem can be reproduced with a simple job like:
> {code:java}
> DataStream source1 = env.fromElements("abc");
> DataStream source2 = env.fromElements("123");
> source1.union(source2).print();{code}
>  
> Logs of collisions:
> {code:java}
> 2022-06-08 00:59:01,629 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBytesInLocal'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0, 
> Shuffle, Netty, Input]
> 2022-06-08 00:59:01,629 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBytesInLocalPerSecond'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0, 
> Shuffle, Netty, Input]
> 2022-06-08 00:59:01,629 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBytesInLocal'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0]
> 2022-06-08 00:59:01,629 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBytesInLocalPerSecond'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0]
> 2022-06-08 00:59:01,630 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBytesInRemote'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0, 
> Shuffle, Netty, Input]
> 2022-06-08 00:59:01,630 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBytesInRemotePerSecond'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0, 
> Shuffle, Netty, Input]
> 2022-06-08 00:59:01,630 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBytesInRemote'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0]
> 2022-06-08 00:59:01,630 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBytesInRemotePerSecond'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0]
> 2022-06-08 00:59:01,630 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBuffersInLocal'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0, 
> Shuffle, Netty, Input]
> 2022-06-08 00:59:01,630 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBuffersInLocalPerSecond'. Metric will not be reported.[, 
> taskmanager, fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to 
> Std. Out, 0, Shuffle, Netty, Input]
> 2022-06-08 00:59:01,630 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric with the 
> name 'numBuffersInLocal'. Metric will not be reported.[, taskmanager, 
> fa9f270e-e904-4f69-8227-8d6e26e1be62, WordCount, Sink: Print to Std. Out, 0]
> 2022-06-08 00:59:01,630 WARN  org.apache.flink.metrics.MetricGroup
>  [] - Name collision: Group already contains a Metric wit

[jira] [Commented] (FLINK-27944) IO metrics collision happens if a task has union inputs

2022-12-01 Thread Piotr Nowojski (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641916#comment-17641916
 ] 

Piotr Nowojski commented on FLINK-27944:


merged to master as 20808fd^ and 20808fd 
merged to release 1.16 as c589cd5f7c8^ and c589cd5f7c8

[~Weijie Guo], could you prepare a backport to the release-1.15 branch? I've 
backported your changes to release-1.16 already, but while doing that for 
release-1.15 I stumbled across some conflicts.

> IO metrics collision happens if a task has union inputs
> ---
>
> Key: FLINK-27944
> URL: https://issues.apache.org/jira/browse/FLINK-27944
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Affects Versions: 1.15.0
>Reporter: Zhu Zhu
>Assignee: Weijie Guo
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.17.0, 1.16.1, 1.15.4
>
>
> When a task has union inputs, some IO metrics (numBytesIn* and numBuffersIn*) 
> of the different inputs may collide and fail to be registered.
>  
> The problem can be reproduced with a simple job like:
> {code:java}
> DataStream source1 = env.fromElements("abc");
> DataStream source2 = env.fromElements("123");
> source1.union(source2).print();{code}
>  

[GitHub] [flink-connector-pulsar] MartijnVisser merged pull request #3: [BP-v3.0][FLINK-29830][Connector/Pulsar] Create the topic with schema before consuming messages in PulsarSinkITCase.

2022-12-01 Thread GitBox


MartijnVisser merged PR #3:
URL: https://github.com/apache/flink-connector-pulsar/pull/3





[GitHub] [flink-connector-pulsar] boring-cyborg[bot] commented on pull request #3: [BP-v3.0][FLINK-29830][Connector/Pulsar] Create the topic with schema before consuming messages in PulsarSinkITCase.

2022-12-01 Thread GitBox


boring-cyborg[bot] commented on PR #3:
URL: 
https://github.com/apache/flink-connector-pulsar/pull/3#issuecomment-1333791651

   Awesome work, congrats on your first merged pull request!
   





[jira] [Updated] (FLINK-29830) PulsarSinkITCase$DeliveryGuaranteeTest.writeRecordsToPulsar failed

2022-12-01 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-29830:
---
Fix Version/s: pulsar-3.0.1

> PulsarSinkITCase$DeliveryGuaranteeTest.writeRecordsToPulsar failed
> --
>
> Key: FLINK-29830
> URL: https://issues.apache.org/jira/browse/FLINK-29830
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Pulsar
>Affects Versions: 1.16.0, 1.17.0, 1.15.3
>Reporter: Martijn Visser
>Assignee: Yufan Sheng
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: pulsar-3.0.1
>
>
> {code:java}
> Nov 01 01:28:03 [ERROR] Failures: 
> Nov 01 01:28:03 [ERROR]   
> PulsarSinkITCase$DeliveryGuaranteeTest.writeRecordsToPulsar:140 
> Nov 01 01:28:03 Actual and expected should have same size but actual size is:
> Nov 01 01:28:03   0
> Nov 01 01:28:03 while expected size is:
> Nov 01 01:28:03   115
> Nov 01 01:28:03 Actual was:
> Nov 01 01:28:03   []
> Nov 01 01:28:03 Expected was:
> Nov 01 01:28:03   ["AT_LEAST_ONCE-isxrFGAL-0-kO65unDUKX",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-1-4tBNu1UmeR",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-2-9PTnEahlNU",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-3-GjWqEp21yz",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-4-jnbJr9C0w8",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-5-e8Wacz5yDO",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-6-9cW53j3Zcf",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-7-jk8z3m2Aa5",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-8-VU56KmMeiz",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-9-uvMdFxxDAj",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-10-FQyWfwJFbH",
> ...
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42680&view=logs&j=aa18c3f6-13b8-5f58-86bb-c1cffb239496&t=502fb6c0-30a2-5e49-c5c2-a00fa3acb203&l=37544



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29830) PulsarSinkITCase$DeliveryGuaranteeTest.writeRecordsToPulsar failed

2022-12-01 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17641928#comment-17641928
 ] 

Martijn Visser commented on FLINK-29830:


Fixed in Pulsar external connector repo: 
5162b766bb042705ecafefc773bb2600b7e56263

> PulsarSinkITCase$DeliveryGuaranteeTest.writeRecordsToPulsar failed
> --
>
> Key: FLINK-29830
> URL: https://issues.apache.org/jira/browse/FLINK-29830
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Pulsar
>Affects Versions: 1.16.0, 1.17.0, 1.15.3
>Reporter: Martijn Visser
>Assignee: Yufan Sheng
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: pulsar-3.0.1
>
>





[GitHub] [flink] mobuchowski commented on pull request #15102: draft: [FLINK-21643] [connectors/jdbc] Provide DynamicJdbcStatementExecutor

2022-12-01 Thread GitBox


mobuchowski commented on PR #15102:
URL: https://github.com/apache/flink/pull/15102#issuecomment-1333794633

   @MartijnVisser thanks - I will spend some time over the weekend to check that 
the code isn't too far from the current main branch, and try to generate some 
interest on the mailing lists.





[GitHub] [flink] zentol commented on a diff in pull request #20919: [FLINK-29405] Fix unstable test InputFormatCacheLoaderTest

2022-12-01 Thread GitBox


zentol commented on code in PR #20919:
URL: https://github.com/apache/flink/pull/20919#discussion_r1037128521


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/table/lookup/fullcache/inputformat/InputFormatCacheLoader.java:
##
@@ -107,7 +109,11 @@ protected void reloadCache() throws Exception {
 } catch (InterruptedException ignored) { // we use interrupt to close 
reload thread
 } finally {
 if (cacheLoadTaskService != null) {
+// if main cache reload thread encountered an exception,
+// it interrupts underlying InputSplitCacheLoadTasks threads
 cacheLoadTaskService.shutdownNow();

Review Comment:
   I'm not sure what the question is.
   
   We certainly shouldn't use the common fork pool. I can see what caused that 
decision to be made, but I'd say the wrong conclusion was drawn. reloadCache 
should've explicitly been an async operation. That would also make it obvious 
when looking at the class where concurrency actually exists.
   
   If this operation is so heavy that you don't want to call it in the calling 
thread then it probably shouldn't run in the common pool.
   
   Not a huge fan of repeatedly creating a new executor service btw; I'd rather 
create it once for a consistent resource profile. As a side-effect it would 
then also be trivial to make this call async.
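   The pattern suggested above - one long-lived executor owned by the loader, 
and a reload that is explicitly asynchronous - can be sketched as follows. All 
names here are invented for illustration and are not Flink's actual classes:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: the loader owns a single long-lived executor instead of
// creating a fresh one per reload, giving a consistent resource profile, and
// reloadAsync() is explicitly async rather than hiding work in the common pool.
public class CacheLoaderSketch implements AutoCloseable {
    private final ExecutorService reloadExecutor =
            Executors.newSingleThreadExecutor();
    private volatile int cachedValue;

    // The heavy reload work runs on the dedicated executor, not on the caller
    // thread and not on the common fork-join pool.
    public CompletableFuture<Integer> reloadAsync() {
        return CompletableFuture.supplyAsync(() -> {
            cachedValue = loadFromSource();
            return cachedValue;
        }, reloadExecutor);
    }

    private int loadFromSource() {
        return 42; // stand-in for the expensive input-format scan
    }

    @Override
    public void close() {
        reloadExecutor.shutdownNow(); // interrupts any in-flight reload
    }

    public static void main(String[] args) throws Exception {
        try (CacheLoaderSketch loader = new CacheLoaderSketch()) {
            System.out.println(loader.reloadAsync().get()); // prints 42
        }
    }
}
```

   Because the executor is created once, callers can also observe a stable 
thread count, and shutdown happens in exactly one place.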
   






[GitHub] [flink] dannycranmer merged pull request #21370: [hotfix][docs] Add missing columns for Kinesis Data Streams Table API…

2022-12-01 Thread GitBox


dannycranmer merged PR #21370:
URL: https://github.com/apache/flink/pull/21370





[GitHub] [flink] dannycranmer merged pull request #21427: [hotfix][docs] Add missing columns for Kinesis Data Streams Table API

2022-12-01 Thread GitBox


dannycranmer merged PR #21427:
URL: https://github.com/apache/flink/pull/21427





[GitHub] [flink] dannycranmer commented on pull request #21426: [hotfix][docs] Add missing columns for Kinesis Data Streams Table API...

2022-12-01 Thread GitBox


dannycranmer commented on PR #21426:
URL: https://github.com/apache/flink/pull/21426#issuecomment-1333802674

   >Error: module "github.com/apache/flink-connector-elasticsearch/docs" not 
found; either add it as a Hugo Module or store it in 
"/home/vsts/work/1/s/docs/themes".: module does not exist
   Error building the docs
   ##[error]Bash exited with code '1'.
   
   Build failure not related





[GitHub] [flink] dannycranmer merged pull request #21426: [hotfix][docs] Add missing columns for Kinesis Data Streams Table API...

2022-12-01 Thread GitBox


dannycranmer merged PR #21426:
URL: https://github.com/apache/flink/pull/21426





[GitHub] [flink] reswqa opened a new pull request, #21439: [BP-1.15][FLINK-27944] Input channel metric will no longer be registered multiple times

2022-12-01 Thread GitBox


reswqa opened a new pull request, #21439:
URL: https://github.com/apache/flink/pull/21439

   ## What is the purpose of the change
   
   *If a task has multiple input gates, it will create as many 
`InputChannelMetrics` as the number of gates, so the corresponding metrics are 
registered repeatedly. This pull request aims to fix this problem.*
   
   
   ## Brief change log
   
 - *Create `InputChannelMetrics` in 
`NettyShuffleEnvironment#createInputGates`.*
   
   
   ## Verifying this change
   
   This change adds a unit test in `NettyShuffleEnvironmentTest`.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
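   As a generic illustration of the idea in the change log above (hypothetical 
names; not Flink's actual shuffle or metrics classes): create and register the 
metric once, outside the per-gate loop, and let each gate update the shared 
instance, so the metric name is only registered a single time:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.LongAdder;

public class SharedInputMetricsSketch {
    // Mimics a metric registry's behavior on a name collision: the first
    // metric registered under a name wins, later registrations are rejected
    // (Flink's MetricGroup logs a WARN and drops the duplicate).
    static final class Registry {
        private final Map<String, LongAdder> metrics = new HashMap<>();

        boolean register(String name, LongAdder metric) {
            return metrics.putIfAbsent(name, metric) == null;
        }
    }

    public static void main(String[] args) {
        Registry registry = new Registry();
        LongAdder numBytesInLocal = new LongAdder();

        // Create and register the metric once, before the per-gate loop...
        boolean registered = registry.register("numBytesInLocal", numBytesInLocal);

        // ...so every input gate updates the same shared counter instead of
        // registering its own copy, which would collide.
        int gateCount = 3;
        for (int gate = 0; gate < gateCount; gate++) {
            numBytesInLocal.add(128);
        }

        System.out.println(registered + " " + numBytesInLocal.sum()); // prints "true 384"
    }
}
```

   Registering inside the loop instead would attempt the same name once per 
gate, which is exactly the collision pattern seen in the FLINK-27944 logs.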
   




