[jira] [Commented] (FLINK-16956) Git fetch failed with exit code 128

2022-03-09 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17503663#comment-17503663
 ] 

Matthias Pohl commented on FLINK-16956:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32719&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=61c73713-1b77-5132-1d22-4d746b4b06d8&l=5989

> Git fetch failed with exit code 128
> ---
>
> Key: FLINK-16956
> URL: https://issues.apache.org/jira/browse/FLINK-16956
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Piotr Nowojski
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7003&view=logs&j=16ccbdb7-2a3e-53da-36eb-fb718edc424a&t=5321d2cb-5c30-5320-9c0c-312babc023c8
> {noformat}
> fatal: could not read Username for 'https://github.com': terminal prompts 
> disabled
> ##[warning]Git fetch failed with exit code 128, back off 8.691 seconds before 
> retry.
> git -c http.extraheader="AUTHORIZATION: basic ***" fetch  --tags --prune 
> --progress --no-recurse-submodules origin
> fatal: could not read Username for 'https://github.com': terminal prompts 
> disabled
> ##[warning]Git fetch failed with exit code 128, back off 3.711 seconds before 
> retry.
> git -c http.extraheader="AUTHORIZATION: basic ***" fetch  --tags --prune 
> --progress --no-recurse-submodules origin
> fatal: could not read Username for 'https://github.com': terminal prompts 
> disabled
> ##[error]Git fetch failed with exit code: 128
> Finishing: Checkout flink-ci/flink-mirror@master to s
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] zentol commented on a change in pull request #18979: [FLINK-26484][fs] FileSystem.delete is not implemented consistently

2022-03-09 Thread GitBox


zentol commented on a change in pull request #18979:
URL: https://github.com/apache/flink/pull/18979#discussion_r822773571



##
File path: 
flink-filesystems/flink-s3-fs-presto/src/main/java/org/apache/flink/fs/s3presto/FlinkS3PrestoFileSystem.java
##
@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.fs.s3presto;
+
+import org.apache.flink.core.fs.FileStatus;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.fs.s3.common.FlinkS3FileSystem;
+import org.apache.flink.fs.s3.common.writer.S3AccessHelper;
+import org.apache.flink.util.ExceptionUtils;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.hadoop.fs.FileSystem;
+
+import javax.annotation.Nullable;
+
+import java.io.IOException;
+import java.util.Optional;
+
+/**
+ * {@code FlinkS3PrestoFileSystem} provides custom recursive deletion functionality to work around a
+ * bug in the internally used Presto file system.
+ *
+ * https://github.com/prestodb/presto/issues/17416
+ */
+public class FlinkS3PrestoFileSystem extends FlinkS3FileSystem {
+
+    public FlinkS3PrestoFileSystem(
+            FileSystem hadoopS3FileSystem,
+            String localTmpDirectory,
+            @Nullable String entropyInjectionKey,
+            int entropyLength,
+            @Nullable S3AccessHelper s3UploadHelper,
+            long s3uploadPartSize,
+            int maxConcurrentUploadsPerStream) {
+        super(
+                hadoopS3FileSystem,
+                localTmpDirectory,
+                entropyInjectionKey,
+                entropyLength,
+                s3UploadHelper,
+                s3uploadPartSize,
+                maxConcurrentUploadsPerStream);
+    }
+
+    @Override
+    public boolean delete(Path path, boolean recursive) throws IOException {
+        if (recursive) {
+            deleteRecursively(path);
+        } else {
+            deleteObject(path);
+        }
+
+        return true;
+    }
+
+    private void deleteRecursively(Path path) throws IOException {
+        final FileStatus[] containingFiles =
+                Preconditions.checkNotNull(
+                        listStatus(path),
+                        "Hadoop FileSystem.listStatus should never return null based on its contract.");
+
+        if (containingFiles.length == 0) {
+            deleteObject(path);
+            return;
+        }

Review comment:
   I'm again wondering why we do this.
   
   Under what circumstances can an empty directory exist when using the Presto filesystem? Mkdirs can't cause it, and deleting the contents is supposed to clean up the directory.
   (Is this actually something that happens automatically, or is it just caused by the FS implementation, which also explicitly issues a delete call for the directory?)




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] XComp merged pull request #19016: [BP-1.13][FLINK-26352][runtime-web] Add missing license headers to WebUI source files

2022-03-09 Thread GitBox


XComp merged pull request #19016:
URL: https://github.com/apache/flink/pull/19016


   






[GitHub] [flink] XComp commented on a change in pull request #18979: [FLINK-26484][fs] FileSystem.delete is not implemented consistently

2022-03-09 Thread GitBox


XComp commented on a change in pull request #18979:
URL: https://github.com/apache/flink/pull/18979#discussion_r822776827



##
File path: 
flink-filesystems/flink-s3-fs-presto/src/main/java/org/apache/flink/fs/s3presto/FlinkS3PrestoFileSystem.java
##
@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.fs.s3presto;
+
+import org.apache.flink.core.fs.FileStatus;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.fs.s3.common.FlinkS3FileSystem;
+import org.apache.flink.fs.s3.common.writer.S3AccessHelper;
+import org.apache.flink.util.ExceptionUtils;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.hadoop.fs.FileSystem;
+
+import javax.annotation.Nullable;
+
+import java.io.IOException;
+import java.util.Optional;
+
+/**
+ * {@code FlinkS3PrestoFileSystem} provides custom recursive deletion functionality to work around a
+ * bug in the internally used Presto file system.
+ *
+ * https://github.com/prestodb/presto/issues/17416
+ */
+public class FlinkS3PrestoFileSystem extends FlinkS3FileSystem {
+
+    public FlinkS3PrestoFileSystem(
+            FileSystem hadoopS3FileSystem,
+            String localTmpDirectory,
+            @Nullable String entropyInjectionKey,
+            int entropyLength,
+            @Nullable S3AccessHelper s3UploadHelper,
+            long s3uploadPartSize,
+            int maxConcurrentUploadsPerStream) {
+        super(
+                hadoopS3FileSystem,
+                localTmpDirectory,
+                entropyInjectionKey,
+                entropyLength,
+                s3UploadHelper,
+                s3uploadPartSize,
+                maxConcurrentUploadsPerStream);
+    }
+
+    @Override
+    public boolean delete(Path path, boolean recursive) throws IOException {
+        if (recursive) {
+            deleteRecursively(path);
+        } else {
+            deleteObject(path);
+        }
+
+        return true;
+    }
+
+    private void deleteRecursively(Path path) throws IOException {
+        final FileStatus[] containingFiles =
+                Preconditions.checkNotNull(
+                        listStatus(path),
+                        "Hadoop FileSystem.listStatus should never return null based on its contract.");
+
+        if (containingFiles.length == 0) {
+            deleteObject(path);
+            return;
+        }

Review comment:
   the path could still refer to an object instead of a directory
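To make the ambiguity concrete: in an S3-like store a "directory" is often just a key prefix, so a path whose listing is empty may equally well be an empty directory marker or a plain object. Below is a minimal, self-contained Java sketch using a toy map-backed object store; all names here are illustrative and are not Flink's or Presto's actual internals.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy stand-in for an S3-like object store: flat keys, no real directories.
public class EmptyListingDemo {
    static final Map<String, String> store = new HashMap<>();

    // Lists keys strictly under "path/". An object stored at exactly `path`
    // is NOT its own child, so listing it yields an empty result.
    static List<String> listChildren(String path) {
        List<String> children = new ArrayList<>();
        for (String key : store.keySet()) {
            if (key.startsWith(path + "/")) {
                children.add(key);
            }
        }
        return children;
    }

    // Mirrors the reviewed logic: an empty listing triggers a direct
    // object delete, because the path may denote a plain object.
    static void deleteRecursively(String path) {
        List<String> children = listChildren(path);
        if (children.isEmpty()) {
            store.remove(path); // "deleteObject": covers the plain-object case
            return;
        }
        for (String child : children) {
            deleteRecursively(child);
        }
        store.remove(path); // finally drop a directory marker, if any
    }

    public static void main(String[] args) {
        store.put("bucket/data", "payload"); // a plain object, not a directory
        System.out.println("children: " + listChildren("bucket/data").size());
        deleteRecursively("bucket/data");
        System.out.println("still present: " + store.containsKey("bucket/data"));
    }
}
```

This is why falling back to a direct object delete on an empty listing matters: it is the only branch that also covers the plain-object case.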









[GitHub] [flink] flinkbot edited a comment on pull request #18905: [FLINK-25927][connectors] Make flink-connector-base dependency usage consistent across all connectors

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18905:
URL: https://github.com/apache/flink/pull/18905#issuecomment-1049717092


   
   ## CI report:
   
   * 4fa3790a48a8f3e67e854d8579d830f5f5c3f6a2 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32757)
 
   * c3e816391654a6c4230414be93683ba0d7932f73 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32761)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] zentol commented on a change in pull request #18987: [FLINK-26494][runtime] Cleanup does not reveal exceptions

2022-03-09 Thread GitBox


zentol commented on a change in pull request #18987:
URL: https://github.com/apache/flink/pull/18987#discussion_r822768800



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/cleanup/DefaultResourceCleaner.java
##
@@ -146,14 +146,15 @@ private Builder(
      */
     public Builder withRegularCleanup(String label, T regularCleanup) {
         assertUniqueLabel(label);
-        this.regularCleanup.put(label, regularCleanup);
+        this.regularCleanup.add(new CleanupWithLabel<>(regularCleanup, label));
         return this;
     }
 
     private void assertUniqueLabel(String label) {
         Preconditions.checkArgument(
-                !this.prioritizedCleanup.containsKey(label)
-                        && !this.regularCleanup.containsKey(label),
+                Stream.concat(prioritizedCleanup.stream(), regularCleanup.stream())
+                        .map(CleanupWithLabel::getLabel)
+                        .noneMatch(label::equals),

Review comment:
   Why should they still be required to be unique?
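For reference, a self-contained sketch of the uniqueness check being discussed, after the switch from a Map to a List: the surrounding Flink types are reduced to a toy pair class, so names and structure here are assumptions, not the actual `DefaultResourceCleaner` code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

// Minimal model of the label-uniqueness check: once cleanups live in Lists
// instead of Maps, uniqueness must be asserted by scanning both lists.
public class LabelUniquenessDemo {
    static final class CleanupWithLabel {
        final Runnable cleanup;
        final String label;

        CleanupWithLabel(Runnable cleanup, String label) {
            this.cleanup = cleanup;
            this.label = label;
        }

        String getLabel() {
            return label;
        }
    }

    final List<CleanupWithLabel> prioritizedCleanup = new ArrayList<>();
    final List<CleanupWithLabel> regularCleanup = new ArrayList<>();

    // Replaces the old Map#containsKey check: stream both lists' labels
    // and require that none match the new label.
    void assertUniqueLabel(String label) {
        boolean unique =
                Stream.concat(prioritizedCleanup.stream(), regularCleanup.stream())
                        .map(CleanupWithLabel::getLabel)
                        .noneMatch(label::equals);
        if (!unique) {
            throw new IllegalArgumentException("label already in use: " + label);
        }
    }

    void withRegularCleanup(String label, Runnable cleanup) {
        assertUniqueLabel(label);
        regularCleanup.add(new CleanupWithLabel(cleanup, label));
    }

    public static void main(String[] args) {
        LabelUniquenessDemo builder = new LabelUniquenessDemo();
        builder.withRegularCleanup("jobgraph", () -> {});
        try {
            builder.withRegularCleanup("jobgraph", () -> {});
            System.out.println("duplicate accepted");
        } catch (IllegalArgumentException e) {
            System.out.println("duplicate rejected");
        }
    }
}
```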








[GitHub] [flink] flinkbot edited a comment on pull request #18905: [FLINK-25927][connectors] Make flink-connector-base dependency usage consistent across all connectors

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18905:
URL: https://github.com/apache/flink/pull/18905#issuecomment-1049717092


   
   ## CI report:
   
   * 4fa3790a48a8f3e67e854d8579d830f5f5c3f6a2 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32757)
 
   * c3e816391654a6c4230414be93683ba0d7932f73 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] zentol commented on a change in pull request #18987: [FLINK-26494][runtime] Cleanup does not reveal exceptions

2022-03-09 Thread GitBox


zentol commented on a change in pull request #18987:
URL: https://github.com/apache/flink/pull/18987#discussion_r822766550



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/FileSystemJobResultStore.java
##
@@ -165,8 +171,16 @@ public boolean hasCleanJobResultEntryInternal(JobID jobId) throws IOException {
 
     @Override
     public Set<JobResult> getDirtyResultsInternal() throws IOException {
+        final FileStatus[] statuses = fileSystem.listStatus(this.basePath);
+        if (statuses == null) {
+            LOG.warn(
+                    "The JobResultStore directory '"
+                            + basePath
+                            + "' was deleted. No persisted JobResults could be recovered.");
+            return Collections.emptySet();

Review comment:
   It's either/or, isn't it? Either we continue as usual and don't log a warning (because it's not really difficult to recover from it), or we log an error and fail. If the FS is plainly broken and doesn't support what we want it to do, then there's no way out for the user. If it's just some haywire process deleting the directory, then again the user can't do anything but restart Flink.
   
   I do think that failing would be unnecessary, though. It seems that we only consider this an invalid state because we eagerly create the directory in the constructor; if we only created it when actually writing a result, this whole issue would disappear.
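As a hedged illustration of the "recoverable state" argument: `java.io.File.listFiles()` happens to share the same contract quirk as Hadoop's `FileSystem.listStatus` here (it returns null for a missing directory), so the situation can be modeled without any Hadoop dependency. Names and structure below are illustrative, not Flink's actual code.

```java
import java.io.File;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Models the discussed behavior: a null listing means the store directory
// vanished, which is treated as "no persisted results" rather than a failure.
public class MissingDirDemo {
    static Set<String> dirtyResults(File basePath) {
        File[] statuses = basePath.listFiles();
        if (statuses == null) {
            // Directory missing or deleted: recoverable, return an empty set.
            return Collections.emptySet();
        }
        Set<String> names = new HashSet<>();
        for (File f : statuses) {
            names.add(f.getName());
        }
        return names;
    }

    public static void main(String[] args) {
        File missing = new File("definitely-missing-dir-12345");
        System.out.println("results: " + dirtyResults(missing).size());
    }
}
```

Note that the design change suggested above (creating the directory lazily on first write rather than eagerly in the constructor) would make the null branch unreachable in normal operation.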








[GitHub] [flink] flinkbot edited a comment on pull request #18905: [FLINK-25927][connectors] Make flink-connector-base dependency usage consistent across all connectors

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18905:
URL: https://github.com/apache/flink/pull/18905#issuecomment-1049717092


   
   ## CI report:
   
   * 4fa3790a48a8f3e67e854d8579d830f5f5c3f6a2 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32757)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #18979: [FLINK-26484][fs] FileSystem.delete is not implemented consistently

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18979:
URL: https://github.com/apache/flink/pull/18979#issuecomment-1059179598


   
   ## CI report:
   
   * c033b36aaec64f6e45d9413a67862d0d5f6248a3 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32695)
 
   * a63e2905ec9988b684b729f62a6c66791e776c75 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32760)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Comment Edited] (FLINK-26528) Trigger the updateControl when the FlinkDeployment have changed

2022-03-09 Thread Aitozi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17503631#comment-17503631
 ] 

Aitozi edited comment on FLINK-26528 at 3/9/22, 3:07 PM:
-

Yes, what I mean is to eliminate no-op updates toward Kubernetes. IMO, the {{UpdateControl}} has four abilities:
 * update status
 * update resource
 * no update, no reschedule
 * no update, but reschedule

So if we have changed the status or resource (not recommended and not used now), we should apply the update. If the status has not reached the desired state and has not changed since the last reconcile loop, we should just reschedule and wait for the next turn.


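That decision table can be sketched as follows. This is a minimal model assuming the semantics of the Java Operator SDK's {{UpdateControl}} (status update, no-update, reschedule), but using a local enum so the example is self-contained; all names are illustrative.

```java
import java.time.Duration;

// Toy model of the reconcile outcome described above: only push an update
// to Kubernetes when something actually changed; otherwise either finish
// (desired state reached) or reschedule and wait for the next loop.
public class ReconcileDecisionDemo {
    enum Outcome { UPDATE_STATUS, NO_UPDATE, NO_UPDATE_RESCHEDULE }

    static final Duration RESCHEDULE_DELAY = Duration.ofSeconds(15); // illustrative

    static Outcome decide(boolean statusChanged, boolean desiredStateReached) {
        if (statusChanged) {
            return Outcome.UPDATE_STATUS;         // apply the change
        }
        if (desiredStateReached) {
            return Outcome.NO_UPDATE;             // nothing to do, don't reschedule
        }
        return Outcome.NO_UPDATE_RESCHEDULE;      // unchanged but not done: poll again
    }

    public static void main(String[] args) {
        System.out.println(decide(true, false));
        System.out.println(decide(false, true));
        System.out.println(decide(false, false));
    }
}
```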

> Trigger the updateControl when the FlinkDeployment have changed
> ---
>
> Key: FLINK-26528
> URL: https://issues.apache.org/jira/browse/FLINK-26528
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kubernetes Operator
>Reporter: Aitozi
>Priority: Major
>
> If the CR has not changed since the last reconcile, we could return an 
> UpdateControl created via {{UpdateControl#noUpdate}}; this is meant to reduce 
> unnecessary updates to the resource.





[GitHub] [flink] flinkbot edited a comment on pull request #18979: [FLINK-26484][fs] FileSystem.delete is not implemented consistently

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18979:
URL: https://github.com/apache/flink/pull/18979#issuecomment-1059179598


   
   ## CI report:
   
   * c033b36aaec64f6e45d9413a67862d0d5f6248a3 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32695)
 
   * a63e2905ec9988b684b729f62a6c66791e776c75 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-25213) Add @Public annotations to flink-table-common

2022-03-09 Thread RocMarshal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17503638#comment-17503638
 ] 

RocMarshal commented on FLINK-25213:


Hi [~monster#12], is there any progress to report from your end?

> Add @Public annotations to flink-table-common
> -
>
> Key: FLINK-25213
> URL: https://issues.apache.org/jira/browse/FLINK-25213
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.15.0
>Reporter: ZhuoYu Chen
>Assignee: ZhuoYu Chen
>Priority: Major
>






[GitHub] [flink] zentol commented on a change in pull request #18905: [FLINK-25927][connectors] Make flink-connector-base dependency usage consistent across all connectors

2022-03-09 Thread GitBox


zentol commented on a change in pull request #18905:
URL: https://github.com/apache/flink/pull/18905#discussion_r822743640



##
File path: flink-connectors/flink-sql-connector-kafka/pom.xml
##
@@ -40,6 +40,12 @@ under the License.
 			<groupId>org.apache.flink</groupId>
 			<artifactId>flink-connector-kafka</artifactId>
 			<version>${project.version}</version>
+			<exclusions>
+				<exclusion>
+					<groupId>org.apache.flink</groupId>
+					<artifactId>flink-connector-base</artifactId>
+				</exclusion>
+			</exclusions>

Review comment:
   I guess what could happen is that some other dependency of connector-kafka may depend on connector-base. However, I can't think of a scenario where we actually want that to result in it being bundled in downstream projects; as is, that'd imply that if users bundle the kafka connector, they'd also bundle connector-base twice.
   
   So, TL;DR: that you have to add this exclusion implies, to me, a larger issue.








[GitHub] [flink] XComp commented on pull request #18979: [FLINK-26484][fs] FileSystem.delete is not implemented consistently

2022-03-09 Thread GitBox


XComp commented on pull request #18979:
URL: https://github.com/apache/flink/pull/18979#issuecomment-1063012051


   Thanks for your review, @zentol . I addressed your comments. PTAL






[GitHub] [flink] flinkbot edited a comment on pull request #19002: [FLINK-26501] extra log

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #19002:
URL: https://github.com/apache/flink/pull/19002#issuecomment-1060970183


   
   ## CI report:
   
   * cbdee82a50ffe1a7c3fdc3b708bafd6258229afc Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32672)
 
   * f9d602dafaee869dddcf952fdf7b91fdc28c637d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32759)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18940: [FLINK-26249][table-planner] Run BuiltInFunctionsTestBase and BuiltInAggregateFunctionsTestBase in parallel

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18940:
URL: https://github.com/apache/flink/pull/18940#issuecomment-1055381485


   
   ## CI report:
   
   * b963e2a9e1fc86e06f9149cbaa85e1b9d9d8614d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32380)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18905: [FLINK-25927][connectors] Make flink-connector-base dependency usage consistent across all connectors

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18905:
URL: https://github.com/apache/flink/pull/18905#issuecomment-1049717092


   
   ## CI report:
   
   * 4fa3790a48a8f3e67e854d8579d830f5f5c3f6a2 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32757)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #19002: [FLINK-26501] extra log

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #19002:
URL: https://github.com/apache/flink/pull/19002#issuecomment-1060970183


   
   ## CI report:
   
   * cbdee82a50ffe1a7c3fdc3b708bafd6258229afc Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32672)
 
   * f9d602dafaee869dddcf952fdf7b91fdc28c637d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18905: [FLINK-25927][connectors] Make flink-connector-base dependency usage consistent across all connectors

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18905:
URL: https://github.com/apache/flink/pull/18905#issuecomment-1049717092


   
   ## CI report:
   
   * 4fa3790a48a8f3e67e854d8579d830f5f5c3f6a2 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32757)
 
   * 5620218a79c73d22b1af1cb5f6099131a65032a7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Commented] (FLINK-26528) Trigger the updateControl when the FlinkDeployment have changed

2022-03-09 Thread Aitozi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17503631#comment-17503631
 ] 

Aitozi commented on FLINK-26528:


Yes, what I mean is to avoid pushing unchanged updates to Kubernetes. IMO, 
{{UpdateControl}} has four abilities:
 * update the status
 * update the resource
 * no update and no reschedule
 * no update but reschedule

So if we have changed the status or the resource (the latter is not recommended 
and not used right now), we should apply the update. If the status has not yet 
reached the desired state and has not changed since the last reconcile loop, we 
should just reschedule and wait for the next turn.
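The decision described above can be sketched as plain Java. This is a minimal, dependency-free model of the four outcomes; the {{Outcome}} enum and {{decide}} method are hypothetical stand-ins for the Java Operator SDK's {{UpdateControl}} factory methods, not its real API:

```java
public class ReconcileDecision {

    /** The four UpdateControl outcomes discussed above. */
    enum Outcome { UPDATE_STATUS, UPDATE_RESOURCE, NO_UPDATE, RESCHEDULE_ONLY }

    /**
     * Only write back to the Kubernetes API server when something actually
     * changed; otherwise just reschedule, or do nothing at all once the
     * desired state has been reached.
     */
    static Outcome decide(boolean statusChanged, boolean resourceChanged, boolean desiredStateReached) {
        if (resourceChanged) {
            return Outcome.UPDATE_RESOURCE; // not recommended and not used today
        }
        if (statusChanged) {
            return Outcome.UPDATE_STATUS;   // status moved: push the update
        }
        if (!desiredStateReached) {
            return Outcome.RESCHEDULE_ONLY; // unchanged but not done: wait for the next loop
        }
        return Outcome.NO_UPDATE;           // steady state: no write, no reschedule
    }

    public static void main(String[] args) {
        if (decide(false, false, false) != Outcome.RESCHEDULE_ONLY) throw new AssertionError();
        if (decide(true, false, false) != Outcome.UPDATE_STATUS) throw new AssertionError();
        if (decide(false, false, true) != Outcome.NO_UPDATE) throw new AssertionError();
    }
}
```

The point of the sketch is the ordering: a reconcile loop should only reach a no-op outcome after ruling out every state change that would require a write.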

> Trigger the updateControl when the FlinkDeployment have changed
> ---
>
> Key: FLINK-26528
> URL: https://issues.apache.org/jira/browse/FLINK-26528
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kubernetes Operator
>Reporter: Aitozi
>Priority: Major
>
> If the CR has not changed since the last reconcile, we could create an 
> UpdateControl with {{UpdateControl#noUpdate}}. This is meant to reduce 
> unnecessary updates to the resource.





[GitHub] [flink] flinkbot edited a comment on pull request #18905: [FLINK-25927][connectors] Make flink-connector-base dependency usage consistent across all connectors

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18905:
URL: https://github.com/apache/flink/pull/18905#issuecomment-1049717092


   
   ## CI report:
   
   * 4fa3790a48a8f3e67e854d8579d830f5f5c3f6a2 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32757)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Created] (FLINK-26553) Enable scalafmt for scala codebase

2022-03-09 Thread Francesco Guardiani (Jira)
Francesco Guardiani created FLINK-26553:
---

 Summary: Enable scalafmt for scala codebase
 Key: FLINK-26553
 URL: https://issues.apache.org/jira/browse/FLINK-26553
 Project: Flink
  Issue Type: Bug
  Components: Build System
Reporter: Francesco Guardiani
Assignee: Francesco Guardiani


As discussed in 
https://lists.apache.org/thread/97398pc9cb8y922xlb6mzlsbjtjf5jnv, we should 
enable scalafmt in our codebase





[jira] [Commented] (FLINK-25235) Re-enable ZooKeeperLeaderElectionITCase#testJobExecutionOnClusterWithLeaderChange

2022-03-09 Thread Niklas Semmler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17503628#comment-17503628
 ] 

Niklas Semmler commented on FLINK-25235:


After staring at this problem for some time and playing through different 
solutions (that usually stopped at different tests), I want to document the 
concepts and issues at play here. 

*Background*
- We recently changed the implementation of the leader election from a separate 
{{LeaderElectionDriver}} per JobManager component (e.g., dispatcher, rest 
server, etc.) to a combined {{LeaderElectionDriver}} for the JobManager as a 
whole. (For ZooKeeper-based leader election we previously used a 
{{ZooKeeperLeaderElectionDriver}}. Now we use 
{{MultipleComponentLeaderElectionDriverAdapter}} which de-multiplexes the 
leader election for all JobManager components via a single 
{{MultipleComponentLeaderElectionService}} to a single 
{{ZooKeeperMultipleComponentLeaderElectionDriver}} underneath.) This changed 
the mapping between {{HighAvailabilityServices}} and {{LeaderElectionDriver}} 
from a one-to-many to a one-to-one relationship.
- {{HighAvailabilityServices}} is the class responsible for high availability 
(e.g., leader fail-over). However, even though JobManager and {{TaskManager}} 
have a dependency on this class, not all scenarios require high availability. 
The implementation {{AbstractNonHaServices}} and its subclasses {{EmbeddedHaServices}} 
(single-JVM setup) and {{StandaloneHaServices}} (no support for JobManager 
failures) are used for these scenarios.
- {{MiniCluster}} is the class responsible for managing a single-node Flink 
cluster. It contains a single {{HighAvailabilityServices}} object that is 
shared by the JobManager and multiple {{TaskManager}}.
- {{TestingMiniCluster}} extends the {{MiniCluster}} for test purposes. Among 
others, it is used for tests on leader election between multiple JobManager. 
Currently, the  implementation re-uses the same {{HighAvailabilityServices}} 
object for multiple JobManager.

*Issues with MiniCluster*
- The {{MiniCluster}} is meant to "to execute Flink jobs locally". To me this 
means that the {{MiniCluster}} _should_ only use {{EmbeddedHaServices}} as it 
does not need high availability on a single JVM. However, it does not put any 
constraints on the type of high availability services used. Furthermore, the 
method {{MiniCluster#useLocalCommunication}} makes it sound as if the 
MiniCluster can also be used for non-local communication. 
- Although the {{MiniCluster}} has the subclass {{TestingMiniCluster}} meant 
for testing, it itself contains code that is meant for testing. E.g., 
{{MiniCluster#getHaLeadershipControl}} is used to give a test explicit control 
over the leader election of an {{EmbeddedHaService}}.
- Even though {{MiniCluster#createDispatcherResourceManagerComponents}} 
offers a tie-in for the creation of multiple JobManagers, {{MiniCluster}} does 
not offer the same freedom in stopping services. 
{{MiniCluster#terminateMiniClusterServices}} assumes that only one instance of 
{{HighAvailabilityServices}} and the other services exists. 

*Issues with TestingMiniCluster*
- The {{TestingMiniCluster}} is used by tests that need multiple JobManager 
with separate leader election (e.g., ZooKeeperLeaderElectionITCase) and some 
that require that all parts including the {{TaskExecutor}} share the same 
{{HighAvailabilityServices}} (e.g., JobExecutionITCase).
- {{TestingMiniCluster}} has two separate tie-ins into the creation of the 
JobManagers. First, it overrides the method used for the creation: 
{{MiniCluster#createDispatcherResourceManagerComponentFactory}}. Second, it 
allows overriding the factory used by the method: 
{{MiniCluster#createDispatcherResourceManagerComponentFactory}}. This 
redundancy makes the code hard to understand.
- {{TestingMiniCluster}} allows overriding the method 
{{MiniCluster#createHighAvailabilityServices}} for creating a 
{{HighAvailabilityServices}}. However, that method already has a two-step 
process for creating the {{HighAvailabilityServices}}. The existing process even 
includes the option of using a custom factory. Again, this redundancy makes the 
code hard to understand.

*Conclusion*
- The {{MiniCluster}} and {{TestingMiniCluster}} cover a large variety of 
different use cases. This makes the classes hard to understand. Splitting these 
classes into one per use case would greatly improve the readability.
- The reason for using inheritance is not clear to me. First, the interaction 
between the subclass and parent class is hard to follow. Second, I don't see in 
what scenarios you would want to replace an instance of {{MiniCluster}} with 
{{TestingMiniCluster}}, so there should be no need to implement the same 
methods. (Could be wrong on this though.) Third, having a single non-abstract 
parent class with a single subclass on its own sounds like a degenerate

[GitHub] [flink] flinkbot edited a comment on pull request #18905: [FLINK-25927][connectors] Make flink-connector-base dependency usage consistent across all connectors

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18905:
URL: https://github.com/apache/flink/pull/18905#issuecomment-1049717092


   
   ## CI report:
   
   * d22ead8681c8c8797eec9b8e2c561f4bb2625fa1 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32749)
 
   * 4fa3790a48a8f3e67e854d8579d830f5f5c3f6a2 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32757)
 
   * 5620218a79c73d22b1af1cb5f6099131a65032a7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #19024: [FLINK-26349][AvroParquet][test] add UT for reading reflect records from parquet file created with generic record schema.

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #19024:
URL: https://github.com/apache/flink/pull/19024#issuecomment-1062971456


   
   ## CI report:
   
   * 48d10e719d0f3030060a0658debdf74f06382e30 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32758)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #19003: [FLINK-26517][runtime] Normalize the decided parallelism to power of 2 when using adaptive batch scheduler

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #19003:
URL: https://github.com/apache/flink/pull/19003#issuecomment-1061377385


   
   ## CI report:
   
   * e16d9fedfcda6856437a942d7e2cf409b3e0d820 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32737)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18905: [FLINK-25927][connectors] Make flink-connector-base dependency usage consistent across all connectors

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18905:
URL: https://github.com/apache/flink/pull/18905#issuecomment-1049717092


   
   ## CI report:
   
   * d22ead8681c8c8797eec9b8e2c561f4bb2625fa1 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32749)
 
   * 4fa3790a48a8f3e67e854d8579d830f5f5c3f6a2 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32757)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] twalthr commented on a change in pull request #18980: [FLINK-26421] Use only EnvironmentSettings to configure the environment

2022-03-09 Thread GitBox


twalthr commented on a change in pull request #18980:
URL: https://github.com/apache/flink/pull/18980#discussion_r822701669



##
File path: 
flink-table/flink-table-api-scala-bridge/src/main/scala/org/apache/flink/table/api/bridge/scala/internal/StreamTableEnvironmentImpl.scala
##
@@ -292,6 +294,12 @@ class StreamTableEnvironmentImpl (
 
 object StreamTableEnvironmentImpl {
 
+  def create(
+  executionEnvironment: StreamExecutionEnvironment,
+  settings: EnvironmentSettings): StreamTableEnvironmentImpl =
+create(executionEnvironment, settings, TableConfig.getDefault)
+
+  @VisibleForTesting

Review comment:
   Let's rework the test then. I don't see a reason why we should make our 
internal methods complicated just for a wrongly implemented test.








[jira] [Commented] (FLINK-26537) Allow disabling StatefulFunctionsConfigValidator validation for classloader.parent-first-patterns.additional

2022-03-09 Thread Igal Shilman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17503610#comment-17503610
 ] 

Igal Shilman commented on FLINK-26537:
--

Hello [~Fil Karnicki], 

How about we add a parameter that basically says this is an embedded run, 
and we disable whatever validations make sense?

> Allow disabling StatefulFunctionsConfigValidator validation for 
> classloader.parent-first-patterns.additional
> 
>
> Key: FLINK-26537
> URL: https://issues.apache.org/jira/browse/FLINK-26537
> Project: Flink
>  Issue Type: Improvement
>  Components: Stateful Functions
>Reporter: Fil Karnicki
>Priority: Major
>
> For some deployments of stateful functions onto existing, shared clusters, it 
> is impossible to tailor which classes exist on the classpath. An example 
> would be a Cloudera Flink cluster, which adds protobuf-java classes that 
> clash with statefun ones.
> Stateful functions require the following flink-config.yaml setting:
> {{classloader.parent-first-patterns.additional: 
> org.apache.flink.statefun;org.apache.kafka;{+}*com.google.protobuf*{+}}} 
> In the case of the cloudera flink cluster, this will cause old, 2.5.0 
> protobuf classes to be loaded by statefun, which causes MethodNotFound 
> exceptions. 
> The idea is to allow disabling of the validation below, if some config 
> setting is present in the global flink configuration, for example: 
> statefun.validation.parent-first-classloader.disable=true
>  
> {code:java}
> private static void validateParentFirstClassloaderPatterns(Configuration 
> configuration) {
>   final Set parentFirstClassloaderPatterns =
>   parentFirstClassloaderPatterns(configuration);
>   if 
> (!parentFirstClassloaderPatterns.containsAll(PARENT_FIRST_CLASSLOADER_PATTERNS))
>  {
> throw new StatefulFunctionsInvalidConfigException(
> CoreOptions.ALWAYS_PARENT_FIRST_LOADER_PATTERNS_ADDITIONAL,
> "Must contain all of " + String.join(", ", 
> PARENT_FIRST_CLASSLOADER_PATTERNS));
>   }
> } {code}
>  
> Then, we wouldn't need to include com.google.protobuf in 
> {{classloader.parent-first-patterns.additional:}} and it would be up to the 
> statefun user to use protobuf classes which are compatible with their version 
> of statefun.
>  
>  



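The escape hatch proposed in FLINK-26537 can be sketched without any Flink dependencies. In this hedged sketch a plain Map stands in for Flink's Configuration; the flag name `statefun.validation.parent-first-classloader.disable` comes from the ticket, while the class and method names here are hypothetical:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ParentFirstPatternCheck {

    // The patterns statefun currently requires in
    // classloader.parent-first-patterns.additional.
    static final Set<String> REQUIRED_PATTERNS =
            new HashSet<>(Arrays.asList(
                    "org.apache.flink.statefun", "org.apache.kafka", "com.google.protobuf"));

    /** Returns null if the configuration is acceptable, otherwise an error message. */
    static String validate(Map<String, String> config) {
        // Proposed opt-out: skip the check entirely when the user disables it.
        if (Boolean.parseBoolean(config.getOrDefault(
                "statefun.validation.parent-first-classloader.disable", "false"))) {
            return null;
        }
        // Patterns are configured as a single semicolon-separated string.
        Set<String> configured = new HashSet<>(Arrays.asList(
                config.getOrDefault("classloader.parent-first-patterns.additional", "").split(";")));
        if (!configured.containsAll(REQUIRED_PATTERNS)) {
            return "Must contain all of " + String.join(", ", REQUIRED_PATTERNS);
        }
        return null;
    }

    public static void main(String[] args) {
        // With the flag set, a config missing com.google.protobuf passes.
        if (validate(Map.of("statefun.validation.parent-first-classloader.disable", "true")) != null) {
            throw new AssertionError();
        }
        // Without the flag, the missing pattern is rejected.
        if (validate(Map.of("classloader.parent-first-patterns.additional",
                "org.apache.flink.statefun;org.apache.kafka")) == null) {
            throw new AssertionError();
        }
    }
}
```

This keeps the existing strict default while letting deployments on shared clusters (like the Cloudera case described above) take responsibility for protobuf compatibility themselves.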


[GitHub] [flink] flinkbot commented on pull request #19024: [FLINK-26349][AvroParquet][test] add UT for reading reflect records from parquet file created with generic record schema.

2022-03-09 Thread GitBox


flinkbot commented on pull request #19024:
URL: https://github.com/apache/flink/pull/19024#issuecomment-1062971456


   
   ## CI report:
   
   * 48d10e719d0f3030060a0658debdf74f06382e30 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #19022: [FLINK-26550][checkpoint] Correct the information of checkpoint failure

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #19022:
URL: https://github.com/apache/flink/pull/19022#issuecomment-1062855606


   
   ## CI report:
   
   * 84a227bab9edfc98a6ee7318377e609d0c0ac164 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32751)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18905: [FLINK-25927][connectors] Make flink-connector-base dependency usage consistent across all connectors

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18905:
URL: https://github.com/apache/flink/pull/18905#issuecomment-1049717092


   
   ## CI report:
   
   * 46f5d666a22e3a489b1dfce4506a778c49485f12 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32731)
 
   * d22ead8681c8c8797eec9b8e2c561f4bb2625fa1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32749)
 
   * 4fa3790a48a8f3e67e854d8579d830f5f5c3f6a2 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] XComp commented on a change in pull request #18979: [FLINK-26484][fs] FileSystem.delete is not implemented consistently

2022-03-09 Thread GitBox


XComp commented on a change in pull request #18979:
URL: https://github.com/apache/flink/pull/18979#discussion_r822695228



##
File path: 
flink-filesystems/flink-s3-fs-presto/src/main/java/org/apache/flink/fs/s3presto/FlinkS3PrestoFileSystem.java
##
@@ -0,0 +1,164 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.fs.s3presto;
+
+import org.apache.flink.core.fs.FileStatus;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.fs.s3.common.FlinkS3FileSystem;
+import org.apache.flink.fs.s3.common.writer.S3AccessHelper;
+import org.apache.flink.util.ExceptionUtils;
+
+import org.apache.hadoop.fs.FileSystem;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+
+import java.io.IOException;
+import java.nio.file.DirectoryNotEmptyException;
+import java.util.Optional;
+
+/**
+ * {@code FlinkS3PrestoFileSystem} provides custom recursive deletion 
functionality to work around a
+ * bug in the internally used Presto file system.
+ *
+ * https://github.com/prestodb/presto/issues/17416
+ */
+public class FlinkS3PrestoFileSystem extends FlinkS3FileSystem {
+
+private static final Logger LOG = 
LoggerFactory.getLogger(FlinkS3PrestoFileSystem.class);
+
+/**
+ * Creates a {@code FlinkS3PrestoFileSystem} based on the given Hadoop S3 
file system. The given
+ * Hadoop file system object is expected to be initialized already.
+ *
+ * This constructor additionally configures the entropy injection for 
the file system.
+ *
+ * @param hadoopS3FileSystem The Hadoop FileSystem that will be used under 
the hood.
+ * @param entropyInjectionKey The substring that will be replaced by 
entropy or removed.
+ * @param entropyLength The number of random alphanumeric characters to 
inject as entropy.
+ */
+public FlinkS3PrestoFileSystem(
+FileSystem hadoopS3FileSystem,
+String localTmpDirectory,
+@Nullable String entropyInjectionKey,
+int entropyLength,
+@Nullable S3AccessHelper s3UploadHelper,
+long s3uploadPartSize,
+int maxConcurrentUploadsPerStream) {
+super(
+hadoopS3FileSystem,
+localTmpDirectory,
+entropyInjectionKey,
+entropyLength,
+s3UploadHelper,
+s3uploadPartSize,
+maxConcurrentUploadsPerStream);
+}
+
+@Override
+public boolean delete(Path path, boolean recursive) throws IOException {
+if (recursive) {
+deleteRecursively(path);
+} else {
+if (isDirectoryWithContent(path)) {
+throw new DirectoryNotEmptyException(path.getPath());
+}
+
+deleteObject(path);
+}
+
+return true;
+}
+
+private void deleteRecursively(Path path) throws IOException {
+final FileStatus[] containingFiles = listStatus(path);
+if (containingFiles == null) {
+LOG.warn(
+        "No files could be retrieved even though the path was marked as existing. "
+                + "This is an indication for a bug in the underlying FileSystem "
+                + "implementation and should be reported. It won't affect the "
+                + "processing of this run, though.");
+return;
+}
+
+IOException exception = null;
+for (FileStatus fileStatus : containingFiles) {
+final Path childPath = fileStatus.getPath();
+
+try {
+if (fileStatus.isDir()) {
+deleteRecursively(childPath);
+} else {
+deleteObject(childPath);
+}
+} catch (IOException e) {
+exception = ExceptionUtils.firstOrSuppressed(e, exception);
+}
+}
+
+if (exception != null) {
+throw exception;
+}
+
+if (containingFiles.length == 0) {
+// Presto doesn't hold placeholders for directories itself; 
therefore, we don't need to
+// call deleteObject on the direct
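The error handling in deleteRecursively above relies on the pattern behind Flink's ExceptionUtils.firstOrSuppressed: the first IOException becomes the primary error, later ones are attached as suppressed exceptions, and the loop still visits every sibling before anything is thrown. A minimal, self-contained sketch of that pattern (plain Java, no Flink dependencies; deleteChild is a hypothetical stand-in for deleting one child path):

```java
import java.io.IOException;
import java.util.List;

public class SuppressedAggregation {

    /** Hypothetical stand-in for deleting one child path; fails for names starting with "bad". */
    static void deleteChild(String name) throws IOException {
        if (name.startsWith("bad")) {
            throw new IOException("cannot delete " + name);
        }
    }

    /** Visit every child, remember the first failure, suppress the rest. */
    static void deleteAll(List<String> children) throws IOException {
        IOException exception = null;
        for (String child : children) {
            try {
                deleteChild(child);
            } catch (IOException e) {
                if (exception == null) {
                    exception = e;              // first failure becomes the primary error
                } else {
                    exception.addSuppressed(e); // later failures ride along as suppressed
                }
            }
        }
        if (exception != null) {
            throw exception; // thrown only after every child was attempted
        }
    }

    public static void main(String[] args) {
        try {
            deleteAll(List.of("ok-1", "bad-1", "ok-2", "bad-2"));
            throw new AssertionError("expected an IOException");
        } catch (IOException e) {
            if (!"cannot delete bad-1".equals(e.getMessage())) throw new AssertionError();
            if (e.getSuppressed().length != 1) throw new AssertionError();
        }
    }
}
```

The design choice here is deliberate: failing fast on the first child would leave the remaining objects undeleted and hide how many deletions actually failed, whereas aggregating keeps the deletion as complete as possible and preserves every error for the caller.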

[GitHub] [flink] twalthr commented on a change in pull request #19014: [FLINK-24586][table-planner] JSON_VALUE should return STRING instead of VARCHAR(2000)

2022-03-09 Thread GitBox


twalthr commented on a change in pull request #19014:
URL: https://github.com/apache/flink/pull/19014#discussion_r822692539



##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/sql/SqlJsonValueFunctionWrapper.java
##
@@ -0,0 +1,83 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.functions.sql;
+
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.sql.SqlJsonValueReturning;
+import org.apache.calcite.sql.SqlOperatorBinding;
+import org.apache.calcite.sql.fun.SqlJsonValueFunction;
+import org.apache.calcite.sql.type.ReturnTypes;
+import org.apache.calcite.sql.type.SqlReturnTypeInference;
+import org.apache.calcite.sql.type.SqlTypeTransforms;
+
+import java.util.Optional;
+
+import static 
org.apache.flink.table.planner.plan.type.FlinkReturnTypes.VARCHAR_FORCE_NULLABLE;
+
+/**
+ * This class is a wrapper class for the {@link SqlJsonValueFunction} but 
using the {@code
+ * VARCHAR_FORCE_NULLABLE} return type inference by default. It also supports 
specifying return type
+ * with the RETURNING keyword just like the original {@link 
SqlJsonValueFunction}.
+ */
+public class SqlJsonValueFunctionWrapper extends SqlJsonValueFunction {

Review comment:
   Make this class default scoped to not spam the classpath.








[GitHub] [flink] XComp commented on a change in pull request #18979: [FLINK-26484][fs] FileSystem.delete is not implemented consistently

2022-03-09 Thread GitBox


XComp commented on a change in pull request #18979:
URL: https://github.com/apache/flink/pull/18979#discussion_r822694090



##
File path: 
flink-filesystems/flink-s3-fs-presto/src/main/java/org/apache/flink/fs/s3presto/FlinkS3PrestoFileSystem.java
##
@@ -0,0 +1,164 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.fs.s3presto;
+
+import org.apache.flink.core.fs.FileStatus;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.fs.s3.common.FlinkS3FileSystem;
+import org.apache.flink.fs.s3.common.writer.S3AccessHelper;
+import org.apache.flink.util.ExceptionUtils;
+
+import org.apache.hadoop.fs.FileSystem;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+
+import java.io.IOException;
+import java.nio.file.DirectoryNotEmptyException;
+import java.util.Optional;
+
+/**
+ * {@code FlinkS3PrestoFileSystem} provides custom recursive deletion 
functionality to work around a
+ * bug in the internally used Presto file system.
+ *
+ * https://github.com/prestodb/presto/issues/17416
+ */
+public class FlinkS3PrestoFileSystem extends FlinkS3FileSystem {
+
+private static final Logger LOG = 
LoggerFactory.getLogger(FlinkS3PrestoFileSystem.class);
+
+/**
+ * Creates a {@code FlinkS3PrestoFileSystem} based on the given Hadoop S3 
file system. The given
+ * Hadoop file system object is expected to be initialized already.
+ *
+ * This constructor additionally configures the entropy injection for 
the file system.
+ *
+ * @param hadoopS3FileSystem The Hadoop FileSystem that will be used under 
the hood.
+ * @param entropyInjectionKey The substring that will be replaced by 
entropy or removed.
+ * @param entropyLength The number of random alphanumeric characters to 
inject as entropy.
+ */
+public FlinkS3PrestoFileSystem(
+FileSystem hadoopS3FileSystem,
+String localTmpDirectory,
+@Nullable String entropyInjectionKey,
+int entropyLength,
+@Nullable S3AccessHelper s3UploadHelper,
+long s3uploadPartSize,
+int maxConcurrentUploadsPerStream) {
+super(
+hadoopS3FileSystem,
+localTmpDirectory,
+entropyInjectionKey,
+entropyLength,
+s3UploadHelper,
+s3uploadPartSize,
+maxConcurrentUploadsPerStream);
+}
+
+@Override
+public boolean delete(Path path, boolean recursive) throws IOException {
+if (recursive) {
+deleteRecursively(path);
+} else {
+if (isDirectoryWithContent(path)) {
+throw new DirectoryNotEmptyException(path.getPath());
+}
+
+deleteObject(path);
+}
+
+return true;
+}
+
+private void deleteRecursively(Path path) throws IOException {
+final FileStatus[] containingFiles = listStatus(path);
+if (containingFiles == null) {
+LOG.warn(
+"No files could be retrieved even though the path was 
marked as existing. "
++ "This is an indication for a bug in the 
underlying FileSystem "
++ "implementation and should be reported. It won't 
affect the "
++ "processing of this run, though.");
+return;
+}
+
+IOException exception = null;
+for (FileStatus fileStatus : containingFiles) {
+final Path childPath = fileStatus.getPath();
+
+try {
+if (fileStatus.isDir()) {
+deleteRecursively(childPath);
+} else {
+deleteObject(childPath);
+}
+} catch (IOException e) {
+exception = ExceptionUtils.firstOrSuppressed(e, exception);
+}
+}
+
+if (exception != null) {
+throw exception;
+}
+
+if (containingFiles.length == 0) {
+// Presto doesn't hold placeholders for directories itself; 
therefore, we don't need to
+// call deleteObject on the direct
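The exception handling in `deleteRecursively` above — keep deleting siblings after a failure, remember the first exception, and attach later ones as suppressed — can be sketched in isolation. This is a plain-Java model of the pattern; `firstOrSuppressed` here is a simplified stand-in for Flink's `ExceptionUtils.firstOrSuppressed`, and the "bad path" check stands in for a real delete call:

```java
import java.io.IOException;
import java.util.List;

public class SuppressedDeleteSketch {

    // Simplified stand-in for ExceptionUtils.firstOrSuppressed: the first
    // exception is kept as the primary one, later ones are attached to it
    // as suppressed exceptions.
    static IOException firstOrSuppressed(IOException newException, IOException previous) {
        if (previous == null) {
            return newException;
        }
        previous.addSuppressed(newException);
        return previous;
    }

    // Attempts to "delete" every path; a failure does not abort the loop.
    static void deleteAll(List<String> paths) throws IOException {
        IOException exception = null;
        for (String path : paths) {
            try {
                if (path.startsWith("bad")) {
                    throw new IOException("failed to delete " + path);
                }
            } catch (IOException e) {
                exception = firstOrSuppressed(e, exception);
            }
        }
        if (exception != null) {
            throw exception;
        }
    }

    public static void main(String[] args) {
        try {
            deleteAll(List.of("a", "bad1", "b", "bad2"));
        } catch (IOException e) {
            // The first failure is primary, the second is suppressed.
            System.out.println(e.getMessage());
            System.out.println(e.getSuppressed().length);
        }
    }
}
```

This way every child is attempted even if some deletions fail, and no failure information is silently dropped.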

[GitHub] [flink] XComp commented on a change in pull request #18979: [FLINK-26484][fs] FileSystem.delete is not implemented consistently

2022-03-09 Thread GitBox


XComp commented on a change in pull request #18979:
URL: https://github.com/apache/flink/pull/18979#discussion_r822693325



##
File path: 
flink-filesystems/flink-s3-fs-presto/src/main/java/org/apache/flink/fs/s3presto/FlinkS3PrestoFileSystem.java
##
@@ -0,0 +1,164 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.fs.s3presto;
+
+import org.apache.flink.core.fs.FileStatus;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.fs.s3.common.FlinkS3FileSystem;
+import org.apache.flink.fs.s3.common.writer.S3AccessHelper;
+import org.apache.flink.util.ExceptionUtils;
+
+import org.apache.hadoop.fs.FileSystem;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+
+import java.io.IOException;
+import java.nio.file.DirectoryNotEmptyException;
+import java.util.Optional;
+
+/**
+ * {@code FlinkS3PrestoFileSystem} provides custom recursive deletion 
functionality to work around a
+ * bug in the internally used Presto file system.
+ *
+ * https://github.com/prestodb/presto/issues/17416
+ */
+public class FlinkS3PrestoFileSystem extends FlinkS3FileSystem {
+
+private static final Logger LOG = 
LoggerFactory.getLogger(FlinkS3PrestoFileSystem.class);
+
+/**
+ * Creates a {@code FlinkS3PrestoFileSystem} based on the given Hadoop S3 
file system. The given
+ * Hadoop file system object is expected to be initialized already.
+ *
+ * This constructor additionally configures the entropy injection for 
the file system.
+ *
+ * @param hadoopS3FileSystem The Hadoop FileSystem that will be used under 
the hood.
+ * @param entropyInjectionKey The substring that will be replaced by 
entropy or removed.
+ * @param entropyLength The number of random alphanumeric characters to 
inject as entropy.
+ */
+public FlinkS3PrestoFileSystem(
+FileSystem hadoopS3FileSystem,
+String localTmpDirectory,
+@Nullable String entropyInjectionKey,
+int entropyLength,
+@Nullable S3AccessHelper s3UploadHelper,
+long s3uploadPartSize,
+int maxConcurrentUploadsPerStream) {
+super(
+hadoopS3FileSystem,
+localTmpDirectory,
+entropyInjectionKey,
+entropyLength,
+s3UploadHelper,
+s3uploadPartSize,
+maxConcurrentUploadsPerStream);
+}
+
+@Override
+public boolean delete(Path path, boolean recursive) throws IOException {
+if (recursive) {
+deleteRecursively(path);
+} else {
+if (isDirectoryWithContent(path)) {
+throw new DirectoryNotEmptyException(path.getPath());
+}
+
+deleteObject(path);

Review comment:
   to cover the exception handling that is introduced in `deleteObject`








[jira] [Updated] (FLINK-26349) AvroParquetReaders does not work with ReflectData

2022-03-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-26349:
---
Labels: pull-request-available  (was: )

> AvroParquetReaders does not work with ReflectData
> -
>
> Key: FLINK-26349
> URL: https://issues.apache.org/jira/browse/FLINK-26349
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.15.0
>Reporter: Dawid Wysakowicz
>Assignee: Jing Ge
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> I tried to change the {{AvroParquetFileReadITCase}} to read the data as 
> {{ReflectData}} and I stumbled on a problem. The scenario is that I use exact 
> same code for writing parquet files, but changed the reading part to:
> {code}
> public static final class User {
> private final String name;
> private final Integer favoriteNumber;
> private final String favoriteColor;
> public User(String name, Integer favoriteNumber, String 
> favoriteColor) {
> this.name = name;
> this.favoriteNumber = favoriteNumber;
> this.favoriteColor = favoriteColor;
> }
> }
> final FileSource source =
> FileSource.forRecordStreamFormat(
> 
> AvroParquetReaders.forReflectRecord(User.class),
> 
> Path.fromLocalFile(TEMPORARY_FOLDER.getRoot()))
> .monitorContinuously(Duration.ofMillis(5))
> .build();
> {code}
> I get an error:
> {code}
> 819020 [flink-akka.actor.default-dispatcher-9] DEBUG 
> org.apache.flink.runtime.jobmaster.JobMaster [] - Archive local failure 
> causing attempt cc9f5e814ea9a3a5b397018dbffcb6a9 to fail: 
> com.esotericsoftware.kryo.KryoException: 
> java.lang.UnsupportedOperationException
> Serialization trace:
> reserved (org.apache.avro.Schema$Field)
> fieldMap (org.apache.avro.Schema$RecordSchema)
> schema (org.apache.avro.generic.GenericData$Record)
>   at 
> com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
>   at 
> com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
>   at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:761)
>   at 
> com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:143)
>   at 
> com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:21)
>   at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
>   at 
> com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
>   at 
> com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
>   at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
>   at 
> com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
>   at 
> com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
>   at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:761)
>   at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:402)
>   at 
> org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:191)
>   at 
> org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:46)
>   at 
> org.apache.flink.runtime.plugable.NonReusingDeserializationDelegate.read(NonReusingDeserializationDelegate.java:53)
>   at 
> org.apache.flink.runtime.io.network.api.serialization.NonSpanningWrapper.readInto(NonSpanningWrapper.java:337)
>   at 
> org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.readNonSpanningRecord(SpillingAdaptiveSpanningRecordDeserializer.java:128)
>   at 
> org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.readNextRecord(SpillingAdaptiveSpanningRecordDeserializer.java:103)
>   at 
> org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.getNextRecord(SpillingAdaptiveSpanningRecordDeserializer.java:93)
>   at 
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:95)
>   at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:519)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:203)
> 

[GitHub] [flink] JingGe opened a new pull request #19024: [FLINK-26349][AvroParquet][test] add UT for reading reflect records from parquet file created with generic record schema.

2022-03-09 Thread GitBox


JingGe opened a new pull request #19024:
URL: https://github.com/apache/flink/pull/19024


   ## What is the purpose of the change
   
   There was an issue while changing the AvroParquetFileReadITCase to read 
reflect data from a parquet file created with a generic record schema. The 
reason is that the user schema used in the test is defined only for Avro 
GenericRecord. In order to make it support reflect record reads, a namespace is 
required so that the program is able to find the class to do reflection.
   
   I have updated the user schema and added one UT to cover this case. Thanks 
for pointing it out.
   
   
   ## Brief change log
   
 - introduce a local User class 
 - add a namespace to the user schema
 - add one UT to cover this scenario
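The namespace requirement described above can be illustrated with the schema variants quoted later in this thread: without a `namespace`, the record name `User` gives ReflectData no package in which to resolve the class. A sketch of the schema with the extra field (the package name here is an assumption for illustration, not the one used in the PR):

```json
{
  "type": "record",
  "name": "User",
  "namespace": "org.example.avro",
  "fields": [
    {"name": "name", "type": "string"},
    {"name": "favoriteNumber", "type": ["int", "null"]},
    {"name": "favoriteColor", "type": ["string", "null"]}
  ]
}
```

With the namespace present, the reader can map the record to the fully qualified class `org.example.avro.User` via reflection; without it, the class cannot be located.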

   
   ## Verifying this change
   
   This change is already covered by existing tests, such as 
AvroParquetRecordFormatTest.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
   






[GitHub] [flink] flinkbot edited a comment on pull request #19021: [hotfix][docs] Clarify semantic of tolerable checkpoint failure number

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #19021:
URL: https://github.com/apache/flink/pull/19021#issuecomment-1062847721


   
   ## CI report:
   
   * 3dd50180ec315c1375a79e0017928dd9e4d37b06 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32745)
 
   * 8a14a4d32b8a2f69426d0faa7fb1e7d18e809475 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32756)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Commented] (FLINK-26498) The window result may not have been emitted when use window emit feature and set allow-latency

2022-03-09 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17503607#comment-17503607
 ] 

Jark Wu commented on FLINK-26498:
-

Yes, the "00:59:20.100" record is cleaned up before being fired. 

> The window result may not have been  emitted when use window emit feature and 
> set allow-latency 
> 
>
> Key: FLINK-26498
> URL: https://issues.apache.org/jira/browse/FLINK-26498
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: hehuiyuan
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2022-03-05-23-53-37-086.png, 
> image-2022-03-05-23-53-44-196.png, image-2022-03-06-00-03-11-670.png
>
>
> the sql of job :
> {code:java}
> CREATE TABLE tableSource(
> name string,
> age int not null,
> sex string,
> dt TIMESTAMP(3),
> WATERMARK FOR dt AS dt - INTERVAL '0' SECOND
> ) WITH (
> );
> CREATE TABLE tableSink(
> windowstart timestamp(3),
> windowend timestamp(3),
> name string,
> age int,
> cou bigint
> )
> WITH (
> );
> INSERT INTO tablesink
>   SELECT
> TUMBLE_START(dt, INTERVAL '1' HOUR),
> TUMBLE_END(dt, INTERVAL '1' HOUR),
> name,
> age,
> count(sex)
> FROM tableSource
> GROUP BY TUMBLE(dt, INTERVAL '1' HOUR), name,age {code}
>  
> and table config:
> {code:java}
> table.exec.emit.allow-lateness = 1 hour 
> table.exec.emit.late-fire.delay = 1 min
> table.exec.emit.early-fire.delay = 1min{code}
>  
> The data:
> {code:java}
> >hehuiyuan1,22,woman,2022-03-05 00:30:22.000
> >hehuiyuan1,22,woman,2022-03-05 00:40:22.000
>  //pause ,wait for the window trigger for earlyTrigger 1 min
> >hehuiyuan1,22,woman,2022-03-05 00:50:22.000
> >hehuiyuan1,22,woman,2022-03-05 00:56:22.000
> //pause ,wait for the window trigger for earlyTrigger 1 min 
> >hehuiyuan1,22,woman,2022-03-05 01:00:00.000
> //pause ,wait for the window trigger for earlyTrigger 1 min 
> >hehuiyuan1,22,woman,2022-03-05 00:59:20.000 --latency data
> //pause ,wait for the window trigger for earlyTrigger 1 min 
> >hehuiyuan1,22,woman,2022-03-05 00:59:20.100 --latency data 
> >hehuiyuan1,22,woman,2022-03-05 02:00:00.000 -- window state clean for 
> >[0:00:00 1:00:00]
> >hehuiyuan1,22,woman,2022-03-05 02:10:00.000 {code}
>  
> The result:
> {code:java}
> > +I(+I[2022-03-05T00:00, 2022-03-05T01:00, hehuiyuan1, 22, 2])
> > -U(-U[2022-03-05T00:00, 2022-03-05T01:00, hehuiyuan1, 22, 2]) 
> > +U(+U[2022-03-05T00:00, 2022-03-05T01:00, hehuiyuan1, 22, 4]) 
> > +I(+I[2022-03-05T01:00, 2022-03-05T02:00, hehuiyuan1, 22, 1])
> > -U(-U[2022-03-05T00:00, 2022-03-05T01:00, hehuiyuan1, 22, 4]) 
> > +U(+U[2022-03-05T00:00, 2022-03-05T01:00, hehuiyuan1, 22, 5])
>  
>  
> > +I(+I[2022-03-05T02:00, 2022-03-05T03:00, hehuiyuan1, 22, 2]) {code}
>  
> The window result is lost when `hehuiyuan1,22,woman,2022-03-05 00:59:20.100` 
> arrived: the lateTrigger was not triggered, and the window [0:00:00, 1:00:00] 
> was cleaned when the data `hehuiyuan1,22,woman,2022-03-05 02:00:00.000` 
> arrived and updated the watermark.
>  
> The window [0:00:00, 1:00:00] has 6 pieces of data, but we got 5.
> The trigger is AfterEndOfWindowEarlyAndLate.
> So WindowOperator may need to emit the result when the window cleanupTimer 
> calls onEventTime.
>  
> I think the correct result is as follows:
> {code:java}
> > +I(+I[2022-03-05T00:00, 2022-03-05T01:00, hehuiyuan1, 22, 2])
> > -U(-U[2022-03-05T00:00, 2022-03-05T01:00, hehuiyuan1, 22, 2])
> > +U(+U[2022-03-05T00:00, 2022-03-05T01:00, hehuiyuan1, 22, 4])
> > +I(+I[2022-03-05T01:00, 2022-03-05T02:00, hehuiyuan1, 22, 1])
> > -U(-U[2022-03-05T00:00, 2022-03-05T01:00, hehuiyuan1, 22, 4])
> > +U(+U[2022-03-05T00:00, 2022-03-05T01:00, hehuiyuan1, 22, 5])
> > -U(-U[2022-03-05T00:00, 2022-03-05T01:00, hehuiyuan1, 22, 5])
> > +U(+U[2022-03-05T00:00, 2022-03-05T01:00, hehuiyuan1, 22, 6])
> > +I(+I[2022-03-05T02:00, 2022-03-05T03:00, hehuiyuan1, 22, 2]) {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] slinkydeveloper commented on a change in pull request #19020: [FLINK-26551][table] Flip legacy casting to disable by default

2022-03-09 Thread GitBox


slinkydeveloper commented on a change in pull request #19020:
URL: https://github.com/apache/flink/pull/19020#discussion_r822689482



##
File path: 
flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/table/KafkaTableITCase.java
##
@@ -299,29 +301,20 @@ public void testKafkaSourceSinkWithMetadata() throws 
Exception {
 + "  %s\n"
 + ")",
 topic, bootstraps, groupId, formatOptions());
-
 tEnv.executeSql(createTable);
 
 String initialValues =
 "INSERT INTO kafka\n"
 + "VALUES\n"
-+ " ('data 1', 1, TIMESTAMP '2020-03-08 13:12:11.123', 
MAP['k1', X'C0FFEE', 'k2', X'BABE'], TRUE),\n"
++ " ('data 1', 1, TIMESTAMP '2020-03-08 13:12:11.123', 
MAP['k1', x'C0FFEE', 'k2', x'BABE01'], TRUE),\n"

Review comment:
   the `BABE01` is unfortunately needed








[jira] (FLINK-24586) SQL functions should return STRING instead of VARCHAR(2000)

2022-03-09 Thread Timo Walther (Jira)


[ https://issues.apache.org/jira/browse/FLINK-24586 ]


Timo Walther deleted comment on FLINK-24586:
--

was (Author: twalthr):
This is indeed a bug. [~nicholasjiang] do you have capacity to fix this missing 
piece?

> SQL functions should return STRING instead of VARCHAR(2000)
> ---
>
> Key: FLINK-24586
> URL: https://issues.apache.org/jira/browse/FLINK-24586
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Ingo Bürk
>Assignee: Nicholas Jiang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> There are some SQL functions which currently return VARCHAR(2000). With more 
> strict CAST behavior from FLINK-24413, this could become an issue.
> The following functions return VARCHAR(2000) and should be changed to return 
> STRING instead:
> * JSON_VALUE
> * JSON_QUERY
> * JSON_OBJECT
> * JSON_ARRAY
> There are also some more functions which should be evaluated:
> * CHR
> * REVERSE
> * SPLIT_INDEX
> * PARSE_URL
> * FROM_UNIXTIME
> * DECODE





[jira] [Commented] (FLINK-24586) SQL functions should return STRING instead of VARCHAR(2000)

2022-03-09 Thread Timo Walther (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17503606#comment-17503606
 ] 

Timo Walther commented on FLINK-24586:
--

This is indeed a bug. [~nicholasjiang] do you have capacity to fix this missing 
piece?

> SQL functions should return STRING instead of VARCHAR(2000)
> ---
>
> Key: FLINK-24586
> URL: https://issues.apache.org/jira/browse/FLINK-24586
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Ingo Bürk
>Assignee: Nicholas Jiang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> There are some SQL functions which currently return VARCHAR(2000). With more 
> strict CAST behavior from FLINK-24413, this could become an issue.
> The following functions return VARCHAR(2000) and should be changed to return 
> STRING instead:
> * JSON_VALUE
> * JSON_QUERY
> * JSON_OBJECT
> * JSON_ARRAY
> There are also some more functions which should be evaluated:
> * CHR
> * REVERSE
> * SPLIT_INDEX
> * PARSE_URL
> * FROM_UNIXTIME
> * DECODE





[GitHub] [flink] matriv commented on a change in pull request #19020: [FLINK-26551][table] Flip legacy casting to disable by default

2022-03-09 Thread GitBox


matriv commented on a change in pull request #19020:
URL: https://github.com/apache/flink/pull/19020#discussion_r822684289



##
File path: 
flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/table/KafkaTableITCase.java
##
@@ -299,29 +301,20 @@ public void testKafkaSourceSinkWithMetadata() throws 
Exception {
 + "  %s\n"
 + ")",
 topic, bootstraps, groupId, formatOptions());
-
 tEnv.executeSql(createTable);
 
 String initialValues =
 "INSERT INTO kafka\n"
 + "VALUES\n"
-+ " ('data 1', 1, TIMESTAMP '2020-03-08 13:12:11.123', 
MAP['k1', X'C0FFEE', 'k2', X'BABE'], TRUE),\n"
++ " ('data 1', 1, TIMESTAMP '2020-03-08 13:12:11.123', 
MAP['k1', x'C0FFEE', 'k2', x'BABE01'], TRUE),\n"

Review comment:
   This is not needed, please revert.








[GitHub] [flink] flinkbot edited a comment on pull request #19021: [hotfix][docs] Clarify semantic of tolerable checkpoint failure number

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #19021:
URL: https://github.com/apache/flink/pull/19021#issuecomment-1062847721


   
   ## CI report:
   
   * 3dd50180ec315c1375a79e0017928dd9e4d37b06 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32745)
 
   * 8a14a4d32b8a2f69426d0faa7fb1e7d18e809475 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Commented] (FLINK-26349) AvroParquetReaders does not work with ReflectData

2022-03-09 Thread Jing Ge (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17503603#comment-17503603
 ] 

Jing Ge commented on FLINK-26349:
-

The user schema in AvroParquetRecordFormatTest is defined only for Avro 
GenericRecord. In order to make it support ReflectData reads, a namespace is 
required so that the program can find the class to do reflection.

I have updated the user schema and added one UT to cover this case. Thanks for 
pointing it out.

 FYI 
parquet file created by the user schema has the following meta:


{code:java}
creator:parquet-mr version 1.12.2 (build 
77e30c8093386ec52c3cfa6c34b7ef3321322c94)
extra:  parquet.avro.schema = 
{"type":"record","name":"User","fields":[{"name":"name","type":"string"},{"name":"favoriteNumber","type":["int","null"]},{"name":"favoriteColor","type":["string","null"]}]}
extra:  writer.model.name = avro

file schema:  User

name:   REQUIRED BINARY L:STRING R:0 D:0
favoriteNumber: OPTIONAL INT32 R:0 D:1
favoriteColor:  OPTIONAL BINARY L:STRING R:0 D:1

row group 1:RC:3 TS:143 OFFSET:4

name:BINARY UNCOMPRESSED DO:0 FPO:4 SZ:47/47/1.00 VC:3 
ENC:PLAIN,BIT_PACKED ST:[min: Jack, max: Tom, num_nulls: 0]
favoriteNumber:  INT32 UNCOMPRESSED DO:0 FPO:51 SZ:41/41/1.00 VC:3 
ENC:RLE,PLAIN,BIT_PACKED ST:[min: 1, max: 3, num_nulls: 0]
favoriteColor:   BINARY UNCOMPRESSED DO:0 FPO:92 SZ:55/55/1.00 VC:3 
ENC:RLE,PLAIN,BIT_PACKED ST:[min: green, max: yellow, num_nulls: 0]
{code}

parquet file created by Datum POJO class has the following meta:

{code:java}
creator: parquet-mr version 1.12.2 (build 
77e30c8093386ec52c3cfa6c34b7ef3321322c94)
extra:   parquet.avro.schema = 
{"type":"record","name":"Datum","namespace":"org.apache.flink.formats.parquet.avro","fields":[{"name":"a","type":"string"},{"name":"b","type":"int"}]}
extra:   writer.model.name = avro

file schema: org.apache.flink.formats.parquet.avro.Datum

a:   REQUIRED BINARY L:STRING R:0 D:0
b:   REQUIRED INT32 R:0 D:0

row group 1: RC:3 TS:73 OFFSET:4

a:BINARY UNCOMPRESSED DO:0 FPO:4 SZ:38/38/1.00 VC:3 
ENC:PLAIN,BIT_PACKED ST:[min: a, max: c, num_nulls: 0]
b:INT32 UNCOMPRESSED DO:0 FPO:42 SZ:35/35/1.00 VC:3 
ENC:PLAIN,BIT_PACKED ST:[min: 1, max: 3, num_nulls: 0]
{code}



> AvroParquetReaders does not work with ReflectData
> -
>
> Key: FLINK-26349
> URL: https://issues.apache.org/jira/browse/FLINK-26349
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.15.0
>Reporter: Dawid Wysakowicz
>Assignee: Jing Ge
>Priority: Critical
> Fix For: 1.15.0
>
>
> I tried to change the {{AvroParquetFileReadITCase}} to read the data as 
> {{ReflectData}} and I stumbled on a problem. The scenario is that I use exact 
> same code for writing parquet files, but changed the reading part to:
> {code}
> public static final class User {
> private final String name;
> private final Integer favoriteNumber;
> private final String favoriteColor;
> public User(String name, Integer favoriteNumber, String 
> favoriteColor) {
> this.name = name;
> this.favoriteNumber = favoriteNumber;
> this.favoriteColor = favoriteColor;
> }
> }
> final FileSource source =
> FileSource.forRecordStreamFormat(
> 
> AvroParquetReaders.forReflectRecord(User.class),
> 
> Path.fromLocalFile(TEMPORARY_FOLDER.getRoot()))
> .monitorContinuously(Duration.ofMillis(5))
> .build();
> {code}
> I get an error:
> {code}
> 819020 [flink-akka.actor.default-dispatcher-9] DEBUG 
> org.apache.flink.runtime.jobmaster.JobMaster [] - Archive local failure 
> causing attempt cc9f5e814ea9a3a5b397018dbffcb6a9 to fail: 
> com.esotericsoftware.kryo.KryoException: 
> java.lang.UnsupportedOperationException
> Serialization trace:
> reserved (org.apache.avro.Schema$Field)
> fieldMap (org.apache.avro.Schema$RecordSchema)
> schema (org.apache.avro.generic.GenericData$Record)
>   at 
> com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
>   at 
> com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
>   at com.esotericsoftw

[jira] [Comment Edited] (FLINK-26349) AvroParquetReaders does not work with ReflectData

2022-03-09 Thread Jing Ge (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17503603#comment-17503603
 ] 

Jing Ge edited comment on FLINK-26349 at 3/9/22, 2:04 PM:
--

The user schema in AvroParquetRecordFormatTest is defined only for Avro 
GenericRecord. In order to make it support ReflectData reads, a namespace is 
required so that the program can find the class to do reflection.

I have updated the user schema and added one UT to cover this case. Thanks for 
pointing it out.

 FYI 
parquet file created by the user schema has the following meta:
{code:java}
creator:parquet-mr version 1.12.2 (build 
77e30c8093386ec52c3cfa6c34b7ef3321322c94)
extra:  parquet.avro.schema = 
{"type":"record","name":"User","fields":[{"name":"name","type":"string"},{"name":"favoriteNumber","type":["int","null"]},{"name":"favoriteColor","type":["string","null"]}]}
extra:  writer.model.name = avro

file schema:  User

name:   REQUIRED BINARY L:STRING R:0 D:0
favoriteNumber: OPTIONAL INT32 R:0 D:1
favoriteColor:  OPTIONAL BINARY L:STRING R:0 D:1

row group 1:RC:3 TS:143 OFFSET:4

name:BINARY UNCOMPRESSED DO:0 FPO:4 SZ:47/47/1.00 VC:3 
ENC:PLAIN,BIT_PACKED ST:[min: Jack, max: Tom, num_nulls: 0]
favoriteNumber:  INT32 UNCOMPRESSED DO:0 FPO:51 SZ:41/41/1.00 VC:3 
ENC:RLE,PLAIN,BIT_PACKED ST:[min: 1, max: 3, num_nulls: 0]
favoriteColor:   BINARY UNCOMPRESSED DO:0 FPO:92 SZ:55/55/1.00 VC:3 
ENC:RLE,PLAIN,BIT_PACKED ST:[min: green, max: yellow, num_nulls: 0]
{code}
parquet file created by Datum POJO class has the following meta:
{code:java}
creator: parquet-mr version 1.12.2 (build 
77e30c8093386ec52c3cfa6c34b7ef3321322c94)
extra:   parquet.avro.schema = 
{"type":"record","name":"Datum","namespace":"org.apache.flink.formats.parquet.avro","fields":[{"name":"a","type":"string"},{"name":"b","type":"int"}]}
extra:   writer.model.name = avro

file schema: org.apache.flink.formats.parquet.avro.Datum

a:   REQUIRED BINARY L:STRING R:0 D:0
b:   REQUIRED INT32 R:0 D:0

row group 1: RC:3 TS:73 OFFSET:4

a:BINARY UNCOMPRESSED DO:0 FPO:4 SZ:38/38/1.00 VC:3 
ENC:PLAIN,BIT_PACKED ST:[min: a, max: c, num_nulls: 0]
b:INT32 UNCOMPRESSED DO:0 FPO:42 SZ:35/35/1.00 VC:3 
ENC:PLAIN,BIT_PACKED ST:[min: 1, max: 3, num_nulls: 0]
{code}


[jira] [Commented] (FLINK-26322) Test FileSink compaction manually

2022-03-09 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17503602#comment-17503602
 ] 

Alexander Preuss commented on FLINK-26322:
--

I tested the FileSink's compaction and here are my observations for the 
scenarios:
 # Working as expected
 # Working as expected
 # Throws {{java.lang.IllegalStateException: Illegal committable to compact, pending file is null}}
 # Does not finish / seems to be stuck; no logs appear after the job has been submitted

> Test FileSink compaction manually
> -
>
> Key: FLINK-26322
> URL: https://issues.apache.org/jira/browse/FLINK-26322
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / FileSystem
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Assignee: Alexander Preuss
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.15.0
>
>
> Documentation of compaction on FileSink: 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/filesystem/#compaction]
> Possible scenarios might include
>  # Enable compaction with file-size based compaction strategy.
>  # Enable compaction with number-checkpoints based compaction strategy.
>  # Enable compaction, stop-with-savepoint and restarted with compaction 
> disabled.
>  # Disable compaction, stop-with-savepoint and restarted with compaction 
> enabled.
> For each scenario, it might need to verify that
>  # No repeat and missed records.
>  # The resulted files' size exceeds the specified condition.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #19019: [FLINK-26547][coordination] No requirement adjustments for unmatched slots

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #19019:
URL: https://github.com/apache/flink/pull/19019#issuecomment-1062795247


   
   ## CI report:
   
   * 79b8bf5bb7f39b00e94a2ef7b921a046e90be9b2 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32734)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #19001: [FLINK-26520][table] Implement SEARCH operator in codegen

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #19001:
URL: https://github.com/apache/flink/pull/19001#issuecomment-1060869402


   
   ## CI report:
   
   * 6ef2e345162e0e64ad1bddfe38e0c3022ab378fa Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32733)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18980: [FLINK-26421] Use only EnvironmentSettings to configure the environment

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18980:
URL: https://github.com/apache/flink/pull/18980#issuecomment-1059275933


   
   ## CI report:
   
   * 679718e76727607cb4a95ade98717b3f8769b054 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32722)
 
   * 1aadc0dbc7f297ab864e6dee57b1df74a1359b1e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32755)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] XComp commented on a change in pull request #18979: [FLINK-26484][fs] FileSystem.delete is not implemented consistently

2022-03-09 Thread GitBox


XComp commented on a change in pull request #18979:
URL: https://github.com/apache/flink/pull/18979#discussion_r822673666



##
File path: 
flink-filesystems/flink-s3-fs-presto/src/main/java/org/apache/flink/fs/s3presto/FlinkS3PrestoFileSystem.java
##
@@ -0,0 +1,164 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.fs.s3presto;
+
+import org.apache.flink.core.fs.FileStatus;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.fs.s3.common.FlinkS3FileSystem;
+import org.apache.flink.fs.s3.common.writer.S3AccessHelper;
+import org.apache.flink.util.ExceptionUtils;
+
+import org.apache.hadoop.fs.FileSystem;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+
+import java.io.IOException;
+import java.nio.file.DirectoryNotEmptyException;
+import java.util.Optional;
+
+/**
+ * {@code FlinkS3PrestoFileSystem} provides custom recursive deletion 
functionality to work around a
+ * bug in the internally used Presto file system.
+ *
+ * https://github.com/prestodb/presto/issues/17416
+ */
+public class FlinkS3PrestoFileSystem extends FlinkS3FileSystem {
+
+private static final Logger LOG = 
LoggerFactory.getLogger(FlinkS3PrestoFileSystem.class);
+
+/**
+ * Creates a {@code FlinkS3PrestoFileSystem} based on the given Hadoop S3 
file system. The given
+ * Hadoop file system object is expected to be initialized already.
+ *
+ * This constructor additionally configures the entropy injection for 
the file system.
+ *
+ * @param hadoopS3FileSystem The Hadoop FileSystem that will be used under 
the hood.
+ * @param entropyInjectionKey The substring that will be replaced by 
entropy or removed.
+ * @param entropyLength The number of random alphanumeric characters to 
inject as entropy.
+ */
+public FlinkS3PrestoFileSystem(
+FileSystem hadoopS3FileSystem,
+String localTmpDirectory,
+@Nullable String entropyInjectionKey,
+int entropyLength,
+@Nullable S3AccessHelper s3UploadHelper,
+long s3uploadPartSize,
+int maxConcurrentUploadsPerStream) {
+super(
+hadoopS3FileSystem,
+localTmpDirectory,
+entropyInjectionKey,
+entropyLength,
+s3UploadHelper,
+s3uploadPartSize,
+maxConcurrentUploadsPerStream);
+}
+
+@Override
+public boolean delete(Path path, boolean recursive) throws IOException {
+if (recursive) {
+deleteRecursively(path);
+} else {
+if (isDirectoryWithContent(path)) {
+throw new DirectoryNotEmptyException(path.getPath());
+}
+
+deleteObject(path);
+}
+
+return true;
+}
+
+private void deleteRecursively(Path path) throws IOException {
+final FileStatus[] containingFiles = listStatus(path);
+if (containingFiles == null) {
+LOG.warn(
+"No files could be retrieved even though the path was marked as existing. "
++ "This is an indication for a bug in the underlying FileSystem "
++ "implementation and should be reported. It won't affect the "
++ "processing of this run, though.");

Review comment:
   Fair enough: the contract of `HadoopFileSystem.listStatus` states that it 
should never return `null` (see 
[JavaDoc](https://hadoop.apache.org/docs/stable/api/org/apache/hadoop/fs/FileSystem.html#listStatus-org.apache.hadoop.fs.Path-)).
 Therefore, a null check as a precondition should be good enough. 👍 
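
   The contract being discussed (recursive delete removes a directory and its contents, non-recursive delete must fail on a non-empty directory, and a null or empty listing is only a precondition check) can be sketched with plain `java.nio.file`. This is a hypothetical stand-in for Flink's `FileSystem` API, not the PR's actual code:

```java
import java.io.IOException;
import java.nio.file.DirectoryNotEmptyException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

/**
 * Sketch of the delete contract discussed above, using plain java.nio.file as
 * a stand-in for Flink's FileSystem API. All names are illustrative.
 */
public class DeleteSketch {

    static boolean delete(Path path, boolean recursive) throws IOException {
        if (recursive) {
            if (Files.exists(path)) {
                try (Stream<Path> entries = Files.walk(path)) {
                    // Materialize the listing first, then delete children before parents.
                    for (Path p : entries.sorted(Comparator.reverseOrder()).toArray(Path[]::new)) {
                        Files.deleteIfExists(p);
                    }
                }
            }
            return true;
        }
        // Non-recursive delete must refuse to remove a non-empty directory.
        if (Files.isDirectory(path)) {
            try (Stream<Path> entries = Files.list(path)) {
                if (entries.findFirst().isPresent()) {
                    throw new DirectoryNotEmptyException(path.toString());
                }
            }
        }
        return Files.deleteIfExists(path);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("delete-sketch");
        Files.createFile(dir.resolve("a.txt"));
        boolean refused = false;
        try {
            delete(dir, false); // non-empty + non-recursive: must throw
        } catch (DirectoryNotEmptyException e) {
            refused = true;
        }
        System.out.println("refused=" + refused);          // refused=true
        delete(dir, true);                                 // recursive: removes everything
        System.out.println("exists=" + Files.exists(dir)); // exists=false
    }
}
```

   Note that the sketch deletes via `Files.deleteIfExists` rather than asserting each child still exists, mirroring the idea that a missing entry during recursive deletion is tolerated rather than treated as an error.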




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18980: [FLINK-26421] Use only EnvironmentSettings to configure the environment

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18980:
URL: https://github.com/apache/flink/pull/18980#issuecomment-1059275933


   
   ## CI report:
   
   * 91046dc10a250f5547cee5478c0b13f14d3903a2 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32549)
 
   * 679718e76727607cb4a95ade98717b3f8769b054 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32722)
 
   * 1aadc0dbc7f297ab864e6dee57b1df74a1359b1e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] XComp commented on a change in pull request #18979: [FLINK-26484][fs] FileSystem.delete is not implemented consistently

2022-03-09 Thread GitBox


XComp commented on a change in pull request #18979:
URL: https://github.com/apache/flink/pull/18979#discussion_r82266



##
File path: 
flink-core/src/test/java/org/apache/flink/core/fs/FileSystemBehaviorTestSuite.java
##
@@ -97,6 +99,92 @@ public void testHomeAndWorkDir() {
 assertEquals(fs.getUri().getScheme(), 
fs.getWorkingDirectory().toUri().getScheme());
 assertEquals(fs.getUri().getScheme(), 
fs.getHomeDirectory().toUri().getScheme());
 }
+// --- exists
+
+@Test
+public void testFileExists() throws IOException {
+final Path filePath = createRandomFileInDirectory(basePath);
+assertTrue(fs.exists(filePath));
+}
+
+@Test
+public void testFileDoesNotExist() throws IOException {
+assertFalse(fs.exists(new Path(basePath, randomName())));
+}
+
+// --- delete
+
+@Test
+public void testExistingFileDeletion() throws IOException {
+testSuccessfulDeletion(createRandomFileInDirectory(basePath), false);
+}
+
+@Test
+public void testExistingFileRecursiveDeletion() throws IOException {
+testSuccessfulDeletion(createRandomFileInDirectory(basePath), true);
+}
+
+@Test
+public void testNotExistingFileDeletion() throws IOException {
+testSuccessfulDeletion(new Path(basePath, randomName()), false);
+}
+
+@Test
+public void testNotExistingFileRecursiveDeletion() throws IOException {
+testSuccessfulDeletion(new Path(basePath, randomName()), true);
+}
+
+@Test
+public void testExistingEmptyDirectoryDeletion() throws IOException {
+final Path path = new Path(basePath, "dir-" + randomName());
+fs.mkdirs(path);
+testSuccessfulDeletion(path, false);
+}
+
+@Test
+public void testExistingEmptyDirectoryRecursiveDeletion() throws 
IOException {
+final Path path = new Path(basePath, "dir-" + randomName());

Review comment:
   No, it just helped me understand the paths when debugging test 
failures. But I can remove it.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #19023: [FLINK-25705][docs]Translate "Metric Reporters" page of "Deployment" …

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #19023:
URL: https://github.com/apache/flink/pull/19023#issuecomment-1062931101


   
   ## CI report:
   
   * a7bad35a1f8622c488accc9b0ecda1e56b2d9c64 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32754)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] ChengkaiYang2022 commented on pull request #19023: [FLINK-25705][docs]Translate "Metric Reporters" page of "Deployment" …

2022-03-09 Thread GitBox


ChengkaiYang2022 commented on pull request #19023:
URL: https://github.com/apache/flink/pull/19023#issuecomment-1062932807


   Hi @RocMarshal, could you help me review this PR?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-26507) Last state upgrade mode should allow reconciliation regardless of job and deployment status

2022-03-09 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-26507.
--
Resolution: Fixed

merged: a738c815a12fc647d24cbbd10a2f4f0a5b9f6900

> Last state upgrade mode should allow reconciliation regardless of job and 
> deployment status
> ---
>
> Key: FLINK-26507
> URL: https://issues.apache.org/jira/browse/FLINK-26507
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>Priority: Major
>  Labels: pull-request-available
>
> Currently there is a strict check for both deployment readiness and 
> successful listing of jobs before we allow any reconciliation.
> While the status should be updated, we should allow reconciliation of jobs 
> with last-state upgrade mode regardless of the deployment/job status, as this 
> mode does not require cluster interactions to execute upgrade and suspend 
> operations.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot commented on pull request #19023: [FLINK-25705][docs]Translate "Metric Reporters" page of "Deployment" …

2022-03-09 Thread GitBox


flinkbot commented on pull request #19023:
URL: https://github.com/apache/flink/pull/19023#issuecomment-1062931101


   
   ## CI report:
   
   * a7bad35a1f8622c488accc9b0ecda1e56b2d9c64 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-26528) Trigger the updateControl when the FlinkDeployment have changed

2022-03-09 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17503593#comment-17503593
 ] 

Gyula Fora commented on FLINK-26528:


I think we are not trying to change the reschedule behaviour here, only 
whether or not we signal a status change towards Kubernetes.

> Trigger the updateControl when the FlinkDeployment have changed
> ---
>
> Key: FLINK-26528
> URL: https://issues.apache.org/jira/browse/FLINK-26528
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kubernetes Operator
>Reporter: Aitozi
>Priority: Major
>
> If the CR has not changed since the last reconcile, we could create an 
> UpdateControl with {{UpdateControl#noUpdate}}; this is meant to reduce 
> unnecessary updates to the resource.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25705) Translate "Metric Reporters" page of "Deployment" in to Chinese

2022-03-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-25705:
---
Labels: auto-unassigned pull-request-available  (was: auto-unassigned)

> Translate "Metric Reporters" page of "Deployment" in to Chinese
> ---
>
> Key: FLINK-25705
> URL: https://issues.apache.org/jira/browse/FLINK-25705
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Chengkai Yang
>Assignee: Chengkai Yang
>Priority: Minor
>  Labels: auto-unassigned, pull-request-available
>
> The page url is 
> [https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/metric_reporters]
> The markdown file is located in 
> flink/docs/content.zh/docs/deployment/metric_reporters.md
> This issue should be merged after FLINK-25830 and 
> [FLINK-26222|https://issues.apache.org/jira/browse/FLINK-26222] are merged or 
> resolved.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] ChengkaiYang2022 opened a new pull request #19023: [FLINK-25705][docs]Translate "Metric Reporters" page of "Deployment" …

2022-03-09 Thread GitBox


ChengkaiYang2022 opened a new pull request #19023:
URL: https://github.com/apache/flink/pull/19023


   …in to Chinese
   
   
   
   ## What is the purpose of the change
   
   Translate "Metric Reporters" page of "Deployment" in to Chinese.
   
   
   ## Brief change log
   
   Translated the "Metric Reporters" page of "Deployment" into Chinese.
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature?  no
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-26473) Observer should support JobManager deployment crashed or deleted externally

2022-03-09 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17503592#comment-17503592
 ] 

Gyula Fora commented on FLINK-26473:


I would say this is not really a duplicate anymore, given that the recent 
changes did not cover this scenario :) 

> Observer should support JobManager deployment crashed or deleted externally
> ---
>
> Key: FLINK-26473
> URL: https://issues.apache.org/jira/browse/FLINK-26473
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kubernetes Operator
>Reporter: Yang Wang
>Priority: Major
>
> Follow the discussion in this PR 
> [https://github.com/apache/flink-kubernetes-operator/pull/26#discussion_r817514763.]
>  
> Currently, the {{observeJmDeployment}} still could not cover some scenarios, 
> e.g. the JobManager deployment crashed, or was deleted 
> externally. Once the {{JobManagerDeploymentStatus}} comes to {{READY}}, it 
> will always stay {{READY}}.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #19020: [FLINK-26551][table] Flip legacy casting to disable by default

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #19020:
URL: https://github.com/apache/flink/pull/19020#issuecomment-1062833750


   
   ## CI report:
   
   * 91b6e51ce99dc6f74161a657897a6f8476ad68b0 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32742)
 
   * e16c14406fad0e2693e9e86f21ba3376e3ba743e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32753)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18739: [FLINK-26030][yarn] Set FLINK_LIB_DIR to 'lib' under working dir in YARN containers

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18739:
URL: https://github.com/apache/flink/pull/18739#issuecomment-1037747523


   
   ## CI report:
   
   * 2b4ab65d7d448a28cc214e4096bb188593b7da6f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32729)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-26454) Improve operator logging

2022-03-09 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora reassigned FLINK-26454:
--

Assignee: Nicholas Jiang

> Improve operator logging
> 
>
> Key: FLINK-26454
> URL: https://issues.apache.org/jira/browse/FLINK-26454
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Assignee: Nicholas Jiang
>Priority: Major
>
> At the moment the way information is logged throughout the operator is very 
> inconsistent. Some parts log the name of the deployment, some the name + 
> namespace, some neither of these.
> We should try to clean this up and unify it across the operator.
> I see basically 2 possible ways:
>  1. Add a log formatter utility to always attach name + namespace information 
> to each logged line
>  2. Remove namespace + name from everywhere and extract this as part of the 
> logger setting from MDC information the operator sdk already provides 
> ([https://javaoperatorsdk.io/docs/features)]
> We should discuss this on the mailing list as part of this work



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26552) Try to use @EnableKubernetesMockClient(crud = true) in controller test

2022-03-09 Thread Gyula Fora (Jira)
Gyula Fora created FLINK-26552:
--

 Summary: Try to use @EnableKubernetesMockClient(crud = true) in 
controller test
 Key: FLINK-26552
 URL: https://issues.apache.org/jira/browse/FLINK-26552
 Project: Flink
  Issue Type: Sub-task
  Components: Kubernetes Operator
Reporter: Gyula Fora


The controller test currently uses the KubernetesMockserver directly.

As [~wangyang0918] pointed out we could try using 
@EnableKubernetesMockClient(crud = true) like in the FlinkService test.

In my initial attempt I ran into some problems with creating events and 
listing them back.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26551) Make the legacy behavior disabled by default

2022-03-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-26551:
---
Labels: pull-request-available  (was: )

> Make the legacy behavior disabled by default
> 
>
> Key: FLINK-26551
> URL: https://issues.apache.org/jira/browse/FLINK-26551
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Francesco Guardiani
>Assignee: Francesco Guardiani
>Priority: Major
>  Labels: pull-request-available
>
> Followup of https://issues.apache.org/jira/browse/FLINK-25111
> For the discussion, see 
> https://lists.apache.org/thread/r13y3plwwyg3sngh8cz47flogq621txv



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #19020: [FLINK-26551][table] Flip legacy casting to disable by default

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #19020:
URL: https://github.com/apache/flink/pull/19020#issuecomment-1062833750


   
   ## CI report:
   
   * 91b6e51ce99dc6f74161a657897a6f8476ad68b0 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32742)
 
   * e16c14406fad0e2693e9e86f21ba3376e3ba743e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #19018: [FLINK-25819][runtime] Reordered requesting and recycling buffers in order to avoid race condition in testIsAvailableOrNotAfterReques

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #19018:
URL: https://github.com/apache/flink/pull/19018#issuecomment-1062745397


   
   ## CI report:
   
   * 33be62479b9cdc58021afa8cfb68bd87355b5f26 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32726)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18987: [FLINK-26494][runtime] Cleanup does not reveal exceptions

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18987:
URL: https://github.com/apache/flink/pull/18987#issuecomment-1059995435


   
   ## CI report:
   
   * 1ba1816cfb7266ee178e8bf815aa79daac42a026 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32621)
 
   * 2175ee842d81b35cff07f0524c154c240fd4ec0a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32752)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18949: [FLINK-25235][tests] re-enable zookeeper test

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18949:
URL: https://github.com/apache/flink/pull/18949#issuecomment-1055620167


   
   ## CI report:
   
   * b1cd905439ca2922134e7884aa8adf12261eae7b Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32728)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18987: [FLINK-26494][runtime] Cleanup does not reveal exceptions

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18987:
URL: https://github.com/apache/flink/pull/18987#issuecomment-1059995435


   
   ## CI report:
   
   * 1ba1816cfb7266ee178e8bf815aa79daac42a026 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32621)
 
   * 2175ee842d81b35cff07f0524c154c240fd4ec0a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Assigned] (FLINK-26551) Make the legacy behavior disabled by default

2022-03-09 Thread Francesco Guardiani (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Guardiani reassigned FLINK-26551:
---

Assignee: Francesco Guardiani

> Make the legacy behavior disabled by default
> 
>
> Key: FLINK-26551
> URL: https://issues.apache.org/jira/browse/FLINK-26551
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Francesco Guardiani
>Assignee: Francesco Guardiani
>Priority: Major
>
> Followup of https://issues.apache.org/jira/browse/FLINK-25111
> For the discussion, see 
> https://lists.apache.org/thread/r13y3plwwyg3sngh8cz47flogq621txv



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26551) Make the legacy behavior disabled by default

2022-03-09 Thread Francesco Guardiani (Jira)
Francesco Guardiani created FLINK-26551:
---

 Summary: Make the legacy behavior disabled by default
 Key: FLINK-26551
 URL: https://issues.apache.org/jira/browse/FLINK-26551
 Project: Flink
  Issue Type: Sub-task
Reporter: Francesco Guardiani


Followup of https://issues.apache.org/jira/browse/FLINK-25111

For the discussion, see 
https://lists.apache.org/thread/r13y3plwwyg3sngh8cz47flogq621txv





[GitHub] [flink] flinkbot edited a comment on pull request #19022: [FLINK-26550][checkpoint] Correct the information of checkpoint failure

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #19022:
URL: https://github.com/apache/flink/pull/19022#issuecomment-1062855606


   
   ## CI report:
   
   * e2a3ce71c31c2bcc1ac90697f1a08bf4fd0b3c82 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32746)
 
   * 84a227bab9edfc98a6ee7318377e609d0c0ac164 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32751)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18996: [FLINK-26492][metric] deprecate umRecordsOutErrorsCounter

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18996:
URL: https://github.com/apache/flink/pull/18996#issuecomment-1060587342


   
   ## CI report:
   
   * 473e98ad1d65e474a9b1841931d71ced4d11e62a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32620)
 
   * c6f6997bf5e337e02855e4f0f4750c5c6e542339 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32750)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] XComp commented on pull request #18987: [FLINK-26494][runtime] Cleanup does not reveal exceptions

2022-03-09 Thread GitBox


XComp commented on pull request #18987:
URL: https://github.com/apache/flink/pull/18987#issuecomment-1062911725


   Thanks for your review, @zentol . I addressed your comments. PTAL






[GitHub] [flink] flinkbot edited a comment on pull request #18905: [FLINK-25927][connectors] Make flink-connector-base dependency usage consistent across all connectors

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18905:
URL: https://github.com/apache/flink/pull/18905#issuecomment-1049717092


   
   ## CI report:
   
   * 46f5d666a22e3a489b1dfce4506a778c49485f12 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32731)
 
   * d22ead8681c8c8797eec9b8e2c561f4bb2625fa1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32749)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Commented] (FLINK-25771) CassandraConnectorITCase.testRetrialAndDropTables timeouts on AZP

2022-03-09 Thread Etienne Chauchot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17503584#comment-17503584
 ] 

Etienne Chauchot commented on FLINK-25771:
--

Thanks [~chesnay] for merging to master and also for the backports.

> CassandraConnectorITCase.testRetrialAndDropTables timeouts on AZP
> -
>
> Key: FLINK-25771
> URL: https://issues.apache.org/jira/browse/FLINK-25771
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.15.0, 1.13.5, 1.14.3
>Reporter: Till Rohrmann
>Assignee: Etienne Chauchot
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0, 1.13.7, 1.14.5
>
>
> The test {{CassandraConnectorITCase.testRetrialAndDropTables}} fails on AZP 
> with
> {code}
> Jan 23 01:02:52 com.datastax.driver.core.exceptions.NoHostAvailableException: 
> All host(s) tried for query failed (tried: /172.17.0.1:59220 
> (com.datastax.driver.core.exceptions.OperationTimedOutException: 
> [/172.17.0.1] Timed out waiting for server response))
> Jan 23 01:02:52   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
> Jan 23 01:02:52   at 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.testRetrialAndDropTables(CassandraConnectorITCase.java:554)
> Jan 23 01:02:52   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jan 23 01:02:52   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jan 23 01:02:52   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jan 23 01:02:52   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 23 01:02:52   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jan 23 01:02:52   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Jan 23 01:02:52   at 
> org.apache.flink.testutils.junit.RetryRule$RetryOnExceptionStatement.evaluate(RetryRule.java:196)
> Jan 23 01:02:52   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jan 23 01:02:52   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> Jan 23 01:02:52   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Jan 23 01:02:52   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Jan 23 01:02:52   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Jan 23 01:02:52   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
> Jan 23 01:02:52   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> Jan 23 01:02:52   at org.junit.run

[GitHub] [flink] XComp commented on a change in pull request #18987: [FLINK-26494][runtime] Cleanup does not reveal exceptions

2022-03-09 Thread GitBox


XComp commented on a change in pull request #18987:
URL: https://github.com/apache/flink/pull/18987#discussion_r822635906



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/FileSystemJobResultStore.java
##
@@ -165,8 +171,16 @@ public boolean hasCleanJobResultEntryInternal(JobID jobId) throws IOException {
 
     @Override
    public Set<JobResult> getDirtyResultsInternal() throws IOException {
+        final FileStatus[] statuses = fileSystem.listStatus(this.basePath);
+        if (statuses == null) {
+            LOG.warn(
+                    "The JobResultStore directory '"
+                            + basePath
+                            + "' was deleted. No persisted JobResults could be recovered.");
+            return Collections.emptySet();

Review comment:
   One could also argue that it should throw an `IllegalStateException` and 
fail fatally. WDYT? 🤔 








[GitHub] [flink] echauchot commented on pull request #18973: [FLINK-25771][connectors][Cassandra][test] Raise the Cassandra driver timeouts at the Cluster level

2022-03-09 Thread GitBox


echauchot commented on pull request #18973:
URL: https://github.com/apache/flink/pull/18973#issuecomment-1062910144


   @zentol thanks for merging !






[GitHub] [flink] XComp commented on a change in pull request #18987: [FLINK-26494][runtime] Cleanup does not reveal exceptions

2022-03-09 Thread GitBox


XComp commented on a change in pull request #18987:
URL: https://github.com/apache/flink/pull/18987#discussion_r822634591



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/FileSystemJobResultStore.java
##
@@ -165,8 +171,16 @@ public boolean hasCleanJobResultEntryInternal(JobID jobId) throws IOException {
 
     @Override
    public Set<JobResult> getDirtyResultsInternal() throws IOException {
+        final FileStatus[] statuses = fileSystem.listStatus(this.basePath);
+        if (statuses == null) {
+            LOG.warn(
+                    "The JobResultStore directory '"
+                            + basePath
+                            + "' was deleted. No persisted JobResults could be recovered.");
+            return Collections.emptySet();

Review comment:
   because it's an unexpected state of Flink which might lead to weird 
behavior. Some FileSystems would complain about a missing directory, whereas 
other filesystems might allow creating a file within this directory anyway 
(e.g. object stores?)
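The null-return convention being discussed can be reproduced outside Flink with plain `java.io.File`, whose `listFiles()` likewise returns `null` (rather than throwing, or returning an empty array) when the directory does not exist. This is a minimal stand-alone sketch using the JDK API, not Flink's `FileSystem` interface; the path is hypothetical:

```java
import java.io.File;

public class MissingDirectoryListingDemo {
    public static void main(String[] args) {
        // A path assumed not to exist on this machine (hypothetical).
        File missing = new File("/definitely/not/an/existing/dir");

        // listFiles() signals "no such directory" with null,
        // not with an exception and not with an empty array.
        File[] entries = missing.listFiles();
        System.out.println(entries == null);
    }
}
```

Because callers easily forget the `null` case, guarding it explicitly (as the patch above does for `listStatus`) avoids a latent `NullPointerException`.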








[GitHub] [flink] flinkbot edited a comment on pull request #19022: [FLINK-26550][checkpoint] Correct the information of checkpoint failure

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #19022:
URL: https://github.com/apache/flink/pull/19022#issuecomment-1062855606


   
   ## CI report:
   
   * e2a3ce71c31c2bcc1ac90697f1a08bf4fd0b3c82 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32746)
 
   * 84a227bab9edfc98a6ee7318377e609d0c0ac164 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18996: [FLINK-26492][metric] deprecate umRecordsOutErrorsCounter

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18996:
URL: https://github.com/apache/flink/pull/18996#issuecomment-1060587342


   
   ## CI report:
   
   * 473e98ad1d65e474a9b1841931d71ced4d11e62a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32620)
 
   * c6f6997bf5e337e02855e4f0f4750c5c6e542339 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] afedulov commented on a change in pull request #18905: [FLINK-25927][connectors] Make flink-connector-base dependency usage consistent across all connectors

2022-03-09 Thread GitBox


afedulov commented on a change in pull request #18905:
URL: https://github.com/apache/flink/pull/18905#discussion_r822634017



##
File path: flink-formats/flink-parquet/pom.xml
##
@@ -308,6 +314,33 @@ under the License.



+			<plugin>
+				<groupId>org.apache.maven.plugins</groupId>
+				<artifactId>maven-shade-plugin</artifactId>
+				<executions>
+					<execution>
+						<id>shade-flink</id>
+						<phase>package</phase>
+						<goals>
+							<goal>shade</goal>
+						</goals>
+						<configuration>
+							<artifactSet>
+								<includes>
+									<include>org.apache.flink:flink-connector-base</include>
+								</includes>
+							</artifactSet>
+							<relocations>
+								<relocation>
+									<pattern>org.apache.flink.connector.base</pattern>
+									<shadedPattern>org.apache.flink.connector.kinesis.shaded.org.apache.flink.connector.base</shadedPattern>

Review comment:
   fixed








[GitHub] [flink] afedulov commented on a change in pull request #18905: [FLINK-25927][connectors] Make flink-connector-base dependency usage consistent across all connectors

2022-03-09 Thread GitBox


afedulov commented on a change in pull request #18905:
URL: https://github.com/apache/flink/pull/18905#discussion_r822633815



##
File path: flink-formats/flink-orc/pom.xml
##
@@ -204,6 +210,33 @@ under the License.



+			<plugin>
+				<groupId>org.apache.maven.plugins</groupId>
+				<artifactId>maven-shade-plugin</artifactId>
+				<executions>
+					<execution>
+						<id>shade-flink</id>
+						<phase>package</phase>
+						<goals>
+							<goal>shade</goal>
+						</goals>
+						<configuration>
+							<artifactSet>
+								<includes>
+									<include>org.apache.flink:flink-connector-base</include>
+								</includes>
+							</artifactSet>
+							<relocations>
+								<relocation>
+									<pattern>org.apache.flink.connector.base</pattern>
+									<shadedPattern>org.apache.flink.connector.kinesis.shaded.org.apache.flink.connector.base</shadedPattern>

Review comment:
   fixed








[GitHub] [flink] flinkbot edited a comment on pull request #18905: [FLINK-25927][connectors] Make flink-connector-base dependency usage consistent across all connectors

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #18905:
URL: https://github.com/apache/flink/pull/18905#issuecomment-1049717092


   
   ## CI report:
   
   * 414adf3bf2aa1467db7e089b10e18b4e9e349b56 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32587)
 
   * 46f5d666a22e3a489b1dfce4506a778c49485f12 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32731)
 
   * d22ead8681c8c8797eec9b8e2c561f4bb2625fa1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] XComp commented on a change in pull request #18987: [FLINK-26494][runtime] Cleanup does not reveal exceptions

2022-03-09 Thread GitBox


XComp commented on a change in pull request #18987:
URL: https://github.com/apache/flink/pull/18987#discussion_r822632235



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/cleanup/DefaultResourceCleaner.java
##
@@ -166,21 +188,49 @@ private DefaultResourceCleaner(
         mainThreadExecutor.assertRunningInMainThread();
 
         CompletableFuture<Void> cleanupFuture = FutureUtils.completedVoidFuture();
-        for (T cleanup : prioritizedCleanup) {
-            cleanupFuture = cleanupFuture.thenCompose(ignoredValue -> withRetry(jobId, cleanup));
+        for (Map.Entry<String, T> cleanup : prioritizedCleanup.entrySet()) {
+            cleanupFuture =
+                    cleanupFuture.thenCompose(
+                            ignoredValue -> withRetry(jobId, cleanup.getKey(), cleanup.getValue()));
         }
 
         return cleanupFuture.thenCompose(
                 ignoredValue ->
                         FutureUtils.completeAll(
-                                regularCleanup.stream()
-                                        .map(cleanup -> withRetry(jobId, cleanup))
+                                regularCleanup.entrySet().stream()
+                                        .map(
+                                                cleanupEntry ->
+                                                        withRetry(
+                                                                jobId,
+                                                                cleanupEntry.getKey(),
+                                                                cleanupEntry.getValue()))
                                         .collect(Collectors.toList())));
     }
 
-    private CompletableFuture<Void> withRetry(JobID jobId, T cleanup) {
+    private CompletableFuture<Void> withRetry(JobID jobId, String label, T cleanup) {
         return FutureUtils.retryWithDelay(
-                () -> cleanupFn.cleanupAsync(cleanup, jobId, cleanupExecutor),
+                () ->
+                        cleanupFn
+                                .cleanupAsync(cleanup, jobId, cleanupExecutor)
+                                .whenComplete(

Review comment:
   `whenComplete` only provides a callback for after completion. I did a 
quick test to verify it:
   ```
   @Test
   public void foo() throws ExecutionException, InterruptedException {
       CompletableFuture<Void> f = new CompletableFuture<>();
       CompletableFuture<Void> result =
               f.whenComplete(
                       (success, failure) -> {
                           if (failure != null) {
                               System.out.println(failure.getClass().getSimpleName());
                           } else {
                               System.out.println("No Failure");
                           }
                       });

       f.completeExceptionally(new RuntimeException());
       result.get();
   }
   ```
   
   The code above failed with printing the class name and failing in the end:
   > java.util.concurrent.ExecutionException: java.lang.RuntimeException
   > at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
   > at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
   > [...]
   > at 
com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:221)
   > at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54)
   > Caused by: java.lang.RuntimeException
   > at 
org.apache.flink.runtime.dispatcher.cleanup.DefaultResourceCleanerTest.foo(DefaultResourceCleanerTest.java:72)
   > ... 67 more









[GitHub] [flink] JingGe commented on a change in pull request #18996: [FLINK-26492][metric] deprecate umRecordsOutErrorsCounter

2022-03-09 Thread GitBox


JingGe commented on a change in pull request #18996:
URL: https://github.com/apache/flink/pull/18996#discussion_r822631316



##
File path: 
flink-tests/src/test/java/org/apache/flink/test/streaming/runtime/SinkMetricsITCase.java
##
@@ -147,7 +147,7 @@ private void assertSinkMetrics(
 * 
MetricWriter.RECORD_SIZE_IN_BYTES)));
 // MetricWriter is just incrementing errors every even record
 assertThat(
-metrics.get(MetricNames.NUM_RECORDS_OUT_ERRORS),
+metrics.get(MetricNames.NUM_RECORDS_SEND_ERRORS),

Review comment:
   You are absolutely right. Thanks!








[jira] [Commented] (FLINK-26394) CheckpointCoordinator.isTriggering can not be reset if a checkpoint expires while the checkpointCoordinator task is queuing in the SourceCoordinator executor.

2022-03-09 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17503579#comment-17503579
 ] 

Yun Tang commented on FLINK-26394:
--

Though the PR still has broken tests, I think this problem really exists. cc 
[~becket_qin] 

> CheckpointCoordinator.isTriggering can not be reset if a checkpoint expires 
> while the checkpointCoordinator task is queuing in the SourceCoordinator 
> executor.
> --
>
> Key: FLINK-26394
> URL: https://issues.apache.org/jira/browse/FLINK-26394
> Project: Flink
>  Issue Type: Bug
>Reporter: Gen Luo
>Priority: Major
>  Labels: pull-request-available
>
> We found a job can no longer trigger checkpoints or savepoints after 
> recovering from a checkpoint timeout failure. After investigation, we found 
> that the `isTriggering` flag in CheckpointCoordinator is true while no 
> checkpoint is actually running, and the root cause is as follows:
>  
>  # The job uses a source whose coordinator needs to scan a table while 
> requesting splits, which may take more than 10min. The source coordinator 
> executor thread will be occupied by `handleSplitRequest`, and 
> `checkpointCoordinator` task of the first checkpoint will be queued after it.
>  # 10min later, the checkpoint expires, removing the pending checkpoint 
> from the coordinator, and triggering a global failover. But the 
> `isTriggering` is not reset here. It can only be reset after the checkpoint 
> completable future is done, which is now held only by the 
> `checkpointCoordinator` task in the queue, along with the PendingCheckpoint.
>  # Then the job fails over, and the RecreateOnResetOperatorCoordinator will 
> recreate a new SourceCoordinator, and close the previous coordinator 
> asynchronously. Timeout for the closing is fixed to 60s. SourceCoordinator 
> will try to `shutdown` the coordinator executor then `awaitTermination`. If 
> the tasks are done within 60s, nothing wrong will happen.
>  # But if the closing method is stuck for more than 60s (which in this case 
> is actually stuck in the `handleSplitRequest`), the async closing thread will 
> be interrupted and SourceCoordinator will `shutdownNow` the executor. All 
> queued tasks will be discarded, including the `checkpointCoordinator` task.
>  # Then the checkpoint completable future will never complete and the 
> `isTriggering` flag will never be reset.
>  
> I see that the closing part of SourceCoordinator is recently refactored. But 
> I find the new implementation also has this issue. And since it calls 
> `shutdownNow` directly, the issue should be easier to encounter.
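Steps 4 and 5 of the scenario above can be sketched with plain JDK executors — a hypothetical stand-alone illustration, not Flink's actual `SourceCoordinator` code — showing how `shutdownNow()` discards a queued task, so that a future only that task could complete stays incomplete forever:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownNowDropsQueuedTaskDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService coordinatorExecutor = Executors.newSingleThreadExecutor();
        CompletableFuture<Void> checkpointFuture = new CompletableFuture<>();

        // A long-running task (stand-in for handleSplitRequest) occupies the only thread.
        coordinatorExecutor.submit(() -> {
            try {
                Thread.sleep(60_000);
            } catch (InterruptedException ignored) {
                // interrupted by shutdownNow(); just return
            }
        });

        // The "checkpointCoordinator" task that would complete the future queues behind it.
        coordinatorExecutor.submit(() -> checkpointFuture.complete(null));

        // shutdownNow() interrupts the running task and discards the queued one,
        // leaving nothing that could ever complete checkpointFuture.
        coordinatorExecutor.shutdownNow();
        coordinatorExecutor.awaitTermination(5, TimeUnit.SECONDS);

        System.out.println(checkpointFuture.isDone());
    }
}
```

This mirrors why `isTriggering` can never be reset in the reported job: the only holder of the checkpoint future was discarded along with the executor's queue.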





[GitHub] [flink] flinkbot edited a comment on pull request #19017: [FLINK-26542][hive] fix "schema of both sides of union should match" exception with Hive dialect

2022-03-09 Thread GitBox


flinkbot edited a comment on pull request #19017:
URL: https://github.com/apache/flink/pull/19017#issuecomment-1062742669


   
   ## CI report:
   
   * 6c5a21cd9c95112a97ef3195eac68d6e4899616c Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32744)
 
   * 7d4fa8b9bc0e3c8fce18c6e5c267ef3d644f454e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32748)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






