[GitHub] [flink] flinkbot edited a comment on pull request #18119: [FLINK-24947] Support hostNetwork for native K8s integration on session mode

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18119:
URL: https://github.com/apache/flink/pull/18119#issuecomment-994734000


   
   ## CI report:
   
   * c15632cf1ee4b38d0060a87c3bedb5cb4d545264 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29851)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25740) PulsarSourceOrderedE2ECase fails on azure

2022-01-21 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-25740:
--
Priority: Critical  (was: Major)

> PulsarSourceOrderedE2ECase fails on azure
> -
>
> Key: FLINK-25740
> URL: https://issues.apache.org/jira/browse/FLINK-25740
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Affects Versions: 1.15.0
>Reporter: Roman Khachatryan
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29789&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=160c9ae5-96fd-516e-1c91-deb81f59292a&l=16385
> {code}
> [ERROR] Errors:
> [ERROR]   PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testIdleReader:187->SourceTestSuiteBase.generateAndWriteTestData:315 » BrokerPersistence
> [ERROR]   PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testIdleReader:187->SourceTestSuiteBase.generateAndWriteTestData:315 » BrokerPersistence
> [ERROR]   PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testMultipleSplits:145->SourceTestSuiteBase.generateAndWriteTestData:315 » BrokerPersistence
> [ERROR]   PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testMultipleSplits:145->SourceTestSuiteBase.generateAndWriteTestData:315 » BrokerPersistence
> [ERROR]   PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testSourceSingleSplit:105->SourceTestSuiteBase.generateAndWriteTestData:315 » BrokerPersistence
> [ERROR]   PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testSourceSingleSplit:105->SourceTestSuiteBase.generateAndWriteTestData:315 » BrokerPersistence
> [ERROR]   PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testTaskManagerFailure:232 » BrokerPersistence
> [ERROR]   PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testTaskManagerFailure:232 » BrokerPersistence
> [ERROR]   PulsarSourceUnorderedE2ECase>UnorderedSourceTestSuiteBase.testOneSplitWithMultipleConsumers:60 » BrokerPersistence
> [ERROR]   PulsarSourceUnorderedE2ECase>UnorderedSourceTestSuiteBase.testOneSplitWithMultipleConsumers:60 » BrokerPersistence
> {code}
> {code}
> 2022-01-20T15:28:37.1467261Z Jan 20 15:28:37 [ERROR] org.apache.flink.tests.util.pulsar.PulsarSourceUnorderedE2ECase.testOneSplitWithMultipleConsumers(TestEnvironment, ExternalContext)[2]  Time elapsed: 77.698 s  <<< ERROR!
> 2022-01-20T15:28:37.1469146Z Jan 20 15:28:37 org.apache.pulsar.client.api.PulsarClientException$BrokerPersistenceException: org.apache.bookkeeper.mledger.ManagedLedgerException: Not enough non-faulty bookies available
> 2022-01-20T15:28:37.1470062Z Jan 20 15:28:37    at org.apache.pulsar.client.api.PulsarClientException.unwrap(PulsarClientException.java:985)
> 2022-01-20T15:28:37.1470802Z Jan 20 15:28:37    at org.apache.pulsar.client.impl.ProducerBuilderImpl.create(ProducerBuilderImpl.java:95)
> 2022-01-20T15:28:37.1471598Z Jan 20 15:28:37    at org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.sendMessages(PulsarRuntimeOperator.java:172)
> 2022-01-20T15:28:37.1472451Z Jan 20 15:28:37    at org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.sendMessages(PulsarRuntimeOperator.java:167)
> 2022-01-20T15:28:37.1473307Z Jan 20 15:28:37    at org.apache.flink.connector.pulsar.testutils.PulsarPartitionDataWriter.writeRecords(PulsarPartitionDataWriter.java:41)
> 2022-01-20T15:28:37.1474209Z Jan 20 15:28:37    at org.apache.flink.tests.util.pulsar.common.UnorderedSourceTestSuiteBase.testOneSplitWithMultipleConsumers(UnorderedSourceTestSuiteBase.java:60)
> 2022-01-20T15:28:37.1474949Z Jan 20 15:28:37    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-01-20T15:28:37.1475658Z Jan 20 15:28:37    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-01-20T15:28:37.1476383Z Jan 20 15:28:37    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-01-20T15:28:37.1477030Z Jan 20 15:28:37    at java.lang.reflect.Method.invoke(Method.java:498)
> 2022-01-20T15:28:37.1477670Z Jan 20 15:28:37    at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
> 2022-01-20T15:28:37.1478388Z Jan 20 15:28:37    at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18359: [FLINK-25484][connectors/filesystem] Support inactivityInterval config in table api

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18359:
URL: https://github.com/apache/flink/pull/18359#issuecomment-1012981632


   
   ## CI report:
   
   * 82871d3416aefbce86d6e32e3e26a51c12841534 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29768)
 
   
   
   






[GitHub] [flink] slinkydeveloper commented on a change in pull request #18353: [FLINK-25129][docs]project configuation changes in docs

2022-01-21 Thread GitBox


slinkydeveloper commented on a change in pull request #18353:
URL: https://github.com/apache/flink/pull/18353#discussion_r789423489



##
File path: docs/content/docs/dev/configuration/overview.md
##
@@ -52,46 +52,36 @@ In Maven syntax, it would look like:
 
 {{< tabs "a49d57a4-27ee-4dd3-a2b8-a673b99b011e" >}}
 {{< tab "Java" >}}
-```xml
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-streaming-java</artifactId>
-  <version>{{< version >}}</version>
-  <scope>provided</scope>
-</dependency>
-```
+
+{{< artifact flink-streaming-java withProvidedScope >}}
+
 {{< /tab >}}
 {{< tab "Scala" >}}
-```xml
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-streaming-scala{{< scala_version >}}</artifactId>
-  <version>{{< version >}}</version>
-  <scope>provided</scope>
-</dependency>
-```
-{{< /tab >}}
-{{< /tabs >}}
 
-**Important:** Note that all these dependencies have their scope set to 
*provided*. This means that
-they are needed to compile against, but that they should not be packaged into 
the project's resulting
-application JAR file. If not set to *provided*, the best case scenario is that 
the resulting JAR
-becomes excessively large, because it also contains all Flink core 
dependencies. The worst case scenario
-is that the Flink core dependencies that are added to the application's JAR 
file clash with some of
-your own dependency versions (which is normally avoided through inverted 
classloading).
+{{< artifact flink-streaming-scala withScalaVersion withProvidedScope >}}
 
-**Note on IntelliJ:** To make the applications run within IntelliJ IDEA, it is 
necessary to tick the
-`Include dependencies with "Provided" scope` box in the run configuration. If 
this option is not available
-(possibly due to using an older IntelliJ IDEA version), then a workaround is 
to create a test that
-calls the application's `main()` method.
+{{< /tab >}}
+{{< /tabs >}}

Review comment:
   Please remove these tabs and replace them with simple tabs for 
maven/gradle/sbt conf like here: 
https://github.com/slinkydeveloper/flink/commit/5d49dd7a0c0b0b824ed72942136a1857aaea91b9#diff-0bf4db953b94c9b897e098765f0ecf359afb3954363dd8c29574dbe3548c7d01R50
   
   Telling me the syntax for the maven dependencies is not really useful here.

##
File path: docs/content/docs/dev/table/sourcesSinks.md
##
@@ -106,6 +106,41 @@ that the planner can handle.
 
 {{< top >}}
 
+
+Project Configuration
+-
+
+If you want to implement a custom format, the following dependency is usually 
sufficient and can be 
+used for JAR files for the SQL Client:
+
+```xml
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-table-common</artifactId>
+  <version>{{< version >}}</version>
+  <scope>provided</scope>
+</dependency>
+```
+
+If you want to develop a connector that needs to bridge with DataStream APIs 
(i.e. if you want to adapt
+a DataStream connector to the Table API), you need to add this dependency:
+
+```xml
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-table-api-java-bridge</artifactId>
+  <version>{{< version >}}</version>
+  <scope>provided</scope>
+</dependency>
+```

Review comment:
   Use the artifact docgen tag
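
   For illustration, the artifact shortcode form used in overview.md earlier in this PR would replace the two Maven snippets roughly as follows (the parameter names are assumed from the other file, not verified against the docgen tag's full option list):

```
{{< artifact flink-table-common withProvidedScope >}}

{{< artifact flink-table-api-java-bridge withProvidedScope >}}
```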

##
File path: docs/content/docs/dev/configuration/connector.md
##
@@ -0,0 +1,72 @@
+---
+title: "Dependencies: Connectors and Formats"
+weight: 5
+type: docs
+---
+
+
+# Dependencies: Connectors and Formats
+
+Flink can read from and write to various external systems via [connectors]({{< 
ref "docs/connectors/table/overview" >}})
+and define the [format]({{< ref "docs/connectors/table/formats/overview" >}}) 
in which to store the 
+data (i.e. mapping binary data onto table columns).  
+
+The way that the information is serialized is represented in the external 
system and that system needs
+to know how to read this data in a format that can be read by Flink.  This is 
done through format dependencies.
+
+Most applications need specific connectors to run. Flink provides a set of 
table formats that can be 
+used with table connectors (with the dependencies for both being fairly 
unified). These are not part 
+of Flink's core dependencies and must be added as dependencies to the 
application.
+
+## Adding Connector Dependencies 
+
+As an example, you can add the Kafka connector as a dependency like this 
(Maven syntax):
+
+{{< artifact flink-connector-kafka >}}
+
+We recommend packaging the application code and all its required dependencies 
into one *JAR-with-dependencies* 
+which we refer to as the *application JAR*. The application JAR can be 
submitted to an already running 
+Flink cluster, or added to a Flink application container image.
+
+Projects created from the `Java Project Template`, the `Scala Project 
Template`, or Gradle are configured 
+to automatically include the application dependencies into the application JAR 
when you run `mvn clean package`. 
+For projects that are not set up from those templates, we recommend adding the 
Maven Shade Plugin to 
+build the application jar with all required dependencies.
+
+**Important:** For Maven (and other build tools) to correctly package the 
dependencies into the application jar,
+these application dependencies must be specified in scope *compile* (unlike 
the core dependencies, which
+must be specified in

[GitHub] [flink] slinkydeveloper commented on pull request #13081: [FLINK-18590][json] Support json array explode to multi messages

2022-01-21 Thread GitBox


slinkydeveloper commented on pull request #13081:
URL: https://github.com/apache/flink/pull/13081#issuecomment-1018272859


   @poan0508 this PR is looking for someone to finalize it. Wanna take a crack 
at it?






[GitHub] [flink] JingsongLi commented on a change in pull request #18394: [FLINK-25520][Table SQL/API] Implement "ALTER TABLE ... COMPACT" SQL

2022-01-21 Thread GitBox


JingsongLi commented on a change in pull request #18394:
URL: https://github.com/apache/flink/pull/18394#discussion_r789435121



##
File path: 
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/utils/TableTestBase.scala
##
@@ -47,21 +53,30 @@ import org.apache.flink.table.expressions.Expression
 import org.apache.flink.table.factories.{FactoryUtil, PlannerFactoryUtil, StreamTableSourceFactory}
 import org.apache.flink.table.functions._
 import org.apache.flink.table.module.ModuleManager
-import org.apache.flink.table.operations.{ModifyOperation, Operation, QueryOperation, SinkModifyOperation}
+import org.apache.flink.table.operations.ModifyOperation
+import org.apache.flink.table.operations.Operation
+import org.apache.flink.table.operations.QueryOperation
+import org.apache.flink.table.operations.SinkModifyOperation

Review comment:
   Your scala style may be something wrong... can you check for Flink Scala 
style?

##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/SqlToOperationConverter.java
##
 @@ -572,14 +574,19 @@ private Operation convertAlterTableReset(
         return new AlterTableOptionsOperation(tableIdentifier, oldTable.copy(newOptions));
     }
 
-    private Operation convertAlterTableCompact(
+    /**
+     * Convert `ALTER TABLE ... COMPACT` operation to {@link ModifyOperation} for Flink's managed
+     * table to trigger a compaction batch job.
+     */
+    private ModifyOperation convertAlterTableCompact(
             ObjectIdentifier tableIdentifier,
-            ResolvedCatalogTable resolvedCatalogTable,
+            ContextResolvedTable contextResolvedTable,
             SqlAlterTableCompact alterTableCompact) {
         Catalog catalog = catalogManager.getCatalog(tableIdentifier.getCatalogName()).orElse(null);
+        ResolvedCatalogTable resolvedCatalogTable = contextResolvedTable.getResolvedTable();
         if (ManagedTableListener.isManagedTable(catalog, resolvedCatalogTable)) {
-            LinkedHashMap<String, String> partitionKVs = alterTableCompact.getPartitionKVs();
-            CatalogPartitionSpec partitionSpec = null;
+            Map<String, String> partitionKVs = alterTableCompact.getPartitionKVs();
+            CatalogPartitionSpec partitionSpec = new CatalogPartitionSpec(Collections.emptyMap());
             if (partitionKVs != null) {
                 List<String> orderedPartitionKeys = resolvedCatalogTable.getPartitionKeys();

Review comment:
   Minor: partitionKeys, no need to `orderedPartitionKeys`








[GitHub] [flink] dawidwys commented on pull request #18405: [FLINK-25683][streaming-java] wrong result if table transfrom to Data…

2022-01-21 Thread GitBox


dawidwys commented on pull request #18405:
URL: https://github.com/apache/flink/pull/18405#issuecomment-1018277879


   Have you tried building Flink from the command line? Usually that helps with any auto-generated files; at least that's what I do.






[jira] [Updated] (FLINK-25276) FLIP-182: Support native and incremental savepoints

2022-01-21 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-25276:
---
Summary: FLIP-182: Support native and incremental savepoints  (was: Support 
native and incremental savepoints)

> FLIP-182: Support native and incremental savepoints
> ---
>
> Key: FLINK-25276
> URL: https://issues.apache.org/jira/browse/FLINK-25276
> Project: Flink
>  Issue Type: New Feature
>Reporter: Piotr Nowojski
>Priority: Major
>
> Motivation: Currently, with non-incremental canonical-format savepoints and 
> very large state, both taking a savepoint and recovering from one can take a 
> very long time. Providing options to take native-format and incremental 
> savepoints would alleviate this problem.
> In the past, the main challenge lay in the ownership semantics and file 
> cleanup of such incremental savepoints. However, with FLINK-25154 implemented, 
> some of those concerns can be solved. Incremental savepoints could leverage the 
> "force full snapshot" mode provided by FLINK-25192 to duplicate/copy all of the 
> savepoint files out of Flink's ownership scope.





[jira] [Created] (FLINK-25744) Support native savepoints (w/o modifying the statebackend specific snapshot strategies)

2022-01-21 Thread Piotr Nowojski (Jira)
Piotr Nowojski created FLINK-25744:
--

 Summary: Support native savepoints (w/o modifying the statebackend 
specific snapshot strategies)
 Key: FLINK-25744
 URL: https://issues.apache.org/jira/browse/FLINK-25744
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Checkpointing
Reporter: Piotr Nowojski
Assignee: Dawid Wysakowicz


For example, w/o incremental RocksDB support. HashMap and full RocksDB should 
work out of the box w/o extra changes.





[jira] [Updated] (FLINK-25276) FLIP-182: Support native and incremental savepoints

2022-01-21 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-25276:
---
Affects Version/s: 1.14.3

> FLIP-182: Support native and incremental savepoints
> ---
>
> Key: FLINK-25276
> URL: https://issues.apache.org/jira/browse/FLINK-25276
> Project: Flink
>  Issue Type: New Feature
>Affects Versions: 1.14.3
>Reporter: Piotr Nowojski
>Priority: Major
>





[jira] [Updated] (FLINK-25276) FLIP-182: Support native and incremental savepoints

2022-01-21 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-25276:
---
Component/s: Runtime / Checkpointing

> FLIP-182: Support native and incremental savepoints
> ---
>
> Key: FLINK-25276
> URL: https://issues.apache.org/jira/browse/FLINK-25276
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Checkpointing
>Affects Versions: 1.14.3
>Reporter: Piotr Nowojski
>Priority: Major
>





[jira] [Updated] (FLINK-25276) FLIP-182: Support native and incremental savepoints

2022-01-21 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-25276:
---
Fix Version/s: 1.15.0

> FLIP-182: Support native and incremental savepoints
> ---
>
> Key: FLINK-25276
> URL: https://issues.apache.org/jira/browse/FLINK-25276
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Checkpointing
>Affects Versions: 1.14.3
>Reporter: Piotr Nowojski
>Priority: Major
> Fix For: 1.15.0
>
>





[jira] [Created] (FLINK-25745) Support RocksDB incremental native savepoints

2022-01-21 Thread Piotr Nowojski (Jira)
Piotr Nowojski created FLINK-25745:
--

 Summary: Support RocksDB incremental native savepoints
 Key: FLINK-25745
 URL: https://issues.apache.org/jira/browse/FLINK-25745
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / State Backends
Reporter: Piotr Nowojski
 Fix For: 1.15.0


Respect the CheckpointType.SharingFilesStrategy#NO_SHARING flag in 
RocksIncrementalSnapshotStrategy. We also need to make sure that 
RocksIncrementalSnapshotStrategy creates self-contained/relocatable 
snapshots (using CheckpointedStateScope#EXCLUSIVE for native savepoints).





[GitHub] [flink] flinkbot edited a comment on pull request #18324: [FLINK-25557][checkpoint] Introduce incremental/full checkpoint size stats

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18324:
URL: https://github.com/apache/flink/pull/18324#issuecomment-1009752905


   
   ## CI report:
   
   * 647c5b7e76e310ff363a31eb9de04c544f2effd9 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29852)
 
   * e53e06e0f6e692252e3e87b4fe797fb306c297ae Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29865)
 
   
   
   






[GitHub] [flink] SteNicholas closed pull request #15729: [FLINK-22234][runtime] Read savepoint before creating ExecutionGraph

2022-01-21 Thread GitBox


SteNicholas closed pull request #15729:
URL: https://github.com/apache/flink/pull/15729


   






[GitHub] [flink] SteNicholas closed pull request #13298: [FLINK-19038][table] It doesn't support to call Table.limit() continuously

2022-01-21 Thread GitBox


SteNicholas closed pull request #13298:
URL: https://github.com/apache/flink/pull/13298


   






[GitHub] [flink] SteNicholas closed pull request #14028: [FLINK-20020][client] Make UnsuccessfulExecutionException part of the JobClient.getJobExecutionResult() contract

2022-01-21 Thread GitBox


SteNicholas closed pull request #14028:
URL: https://github.com/apache/flink/pull/14028


   






[GitHub] [flink] flinkbot edited a comment on pull request #18137: [FLINK-25287][connector-testing-framework] Refactor interfaces of connector testing framework to support more scenarios

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18137:
URL: https://github.com/apache/flink/pull/18137#issuecomment-996615034


   
   ## CI report:
   
   * 66280958ba0046cbef940e575fc61a4ec6d62253 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29747)
 
   * d3a3caf926f972fa7b83edc1d66d9883aba15376 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29853)
 
   * f67c40496c3c04c45f07a8e49ccf7413e5854244 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29862)
 
   * 1123a75a23288a4161b08133a6cdcc71030b384e UNKNOWN
   
   
   






[GitHub] [flink] fapaul commented on a change in pull request #18397: [FLINK-25702][Kafka] Use the configure feature provided by the kafka Serializer/Deserializer.

2022-01-21 Thread GitBox


fapaul commented on a change in pull request #18397:
URL: https://github.com/apache/flink/pull/18397#discussion_r789453203



##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/reader/deserializer/KafkaValueOnlyDeserializerWrapper.java
##
 @@ -62,8 +75,15 @@ public void open(DeserializationSchema.InitializationContext context) throws Exc
                 deserializerClass.getName(),
                 Deserializer.class,
                 getClass().getClassLoader());
+
+        if (config.isEmpty()) {
+            return;
+        }
+
         if (deserializer instanceof Configurable) {
             ((Configurable) deserializer).configure(config);
+        } else {

Review comment:
   Currently, we have the following scenarios:
   
   1. The `De/Serializer` implements `Configurable`: I would **only** call `Configurable.configure`.
   2. It does not implement `Configurable`, but a configuration is given: call `De/Serializer.configure`.
   3. No configuration is given: do we call `De/Serializer.configure` with an empty map?
   
   You probably have to write tests for all of them, although I am not fully sure about the last one. I guess calling `configure` with an empty map does not hurt.
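
   As a hypothetical sketch of the dispatch rule in the scenarios above (the `Configurable` and `PlainDeserializer` interfaces here are simplified stand-ins, not the real Kafka API surface):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class ConfigureDispatchSketch {

    /** Simplified stand-in for a Configurable-style interface. */
    interface Configurable {
        void configure(Map<String, ?> config);
    }

    /** Simplified stand-in for a deserializer's own configure hook. */
    interface PlainDeserializer {
        void configure(Map<String, ?> config, boolean isKey);
    }

    /**
     * Dispatches configuration as discussed in the review: scenario 1 uses only
     * Configurable.configure; scenarios 2 and 3 fall back to the deserializer's
     * own configure, even with an empty map (assumed harmless).
     * Returns which path was taken, for illustration only.
     */
    static String configureDeserializer(Object deserializer, Map<String, ?> config, boolean isKey) {
        if (deserializer instanceof Configurable) {
            ((Configurable) deserializer).configure(config);
            return "Configurable.configure";
        }
        if (deserializer instanceof PlainDeserializer) {
            ((PlainDeserializer) deserializer).configure(config, isKey);
            return "Deserializer.configure";
        }
        return "none";
    }

    public static void main(String[] args) {
        Map<String, Object> config = new HashMap<>();
        config.put("some.key", "some.value");

        Configurable configurable = cfg -> {};
        PlainDeserializer plain = (cfg, isKey) -> {};

        // Scenario 1: Configurable takes precedence.
        System.out.println(configureDeserializer(configurable, config, false));
        // Scenario 2: plain deserializer with a non-empty config.
        System.out.println(configureDeserializer(plain, config, false));
        // Scenario 3: empty config; configure is still called.
        System.out.println(configureDeserializer(plain, Collections.emptyMap(), false));
    }
}
```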








[GitHub] [flink] flinkbot edited a comment on pull request #18137: [FLINK-25287][connector-testing-framework] Refactor interfaces of connector testing framework to support more scenarios

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18137:
URL: https://github.com/apache/flink/pull/18137#issuecomment-996615034


   
   ## CI report:
   
   * d3a3caf926f972fa7b83edc1d66d9883aba15376 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29853)
 
   * f67c40496c3c04c45f07a8e49ccf7413e5854244 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29862)
 
   * 1123a75a23288a4161b08133a6cdcc71030b384e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29869)
 
   
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #18437: [FLINK-25712][connector/tests] Merge flink-connector-testing into flink-connector-test-utils

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18437:
URL: https://github.com/apache/flink/pull/18437#issuecomment-1018174136


   
   ## CI report:
   
   * be3d03d337bb7358ee949445f0530a73d02c43dc Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29854)
 
   * 049a0ebc6535291170b03e739521344d54809682 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29863)
 
   * 66184529bb0e4f34be6b7e3755d06cd50939f894 UNKNOWN
   
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #18436: [BP-1.13][FLINK-24334][k8s] Set FLINK_LOG_DIR environment for JobManager and TaskManager pod if configured via options

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18436:
URL: https://github.com/apache/flink/pull/18436#issuecomment-1018094327


   
   ## CI report:
   
   * f87a46d760404ab5d51b2d5ed554459d3b07fde6 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29845)
 
   
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #18437: [FLINK-25712][connector/tests] Merge flink-connector-testing into flink-connector-test-utils

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18437:
URL: https://github.com/apache/flink/pull/18437#issuecomment-1018174136


   
   ## CI report:
   
   * be3d03d337bb7358ee949445f0530a73d02c43dc Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29854)
 
   * 049a0ebc6535291170b03e739521344d54809682 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29863)
 
   * 66184529bb0e4f34be6b7e3755d06cd50939f894 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29870)
 
   
   
   






[jira] [Commented] (FLINK-24623) Prevent usage of EventTimeWindows when EventTime is disabled

2022-01-21 Thread Alexander Fedulov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479905#comment-17479905
 ] 

Alexander Fedulov commented on FLINK-24623:
---

[~Dario] Hi Dario, could you propose your approach in a form of a PR?

> Prevent usage of EventTimeWindows when EventTime is disabled
> 
>
> Key: FLINK-24623
> URL: https://issues.apache.org/jira/browse/FLINK-24623
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Dario Heinisch
>Priority: Not a Priority
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The following stream will never process values after the windowing, because 
> event time has been disabled via the watermark strategy:
> {code:java}
> public class PlaygroundJob {
>     public static void main(String[] args) throws Exception {
>         StreamExecutionEnvironment env =
>                 StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());
>
>         DataStreamSource<Tuple2<Long, Integer>> source =
>                 env.addSource(new SourceFunction<Tuple2<Long, Integer>>() {
>                     @Override
>                     public void run(SourceContext<Tuple2<Long, Integer>> sourceContext) throws Exception {
>                         int i = 0;
>                         while (true) {
>                             Tuple2<Long, Integer> tuple = Tuple2.of(System.currentTimeMillis(), i++ % 10);
>                             sourceContext.collect(tuple);
>                         }
>                     }
>
>                     @Override
>                     public void cancel() {}
>                 });
>
>         source.assignTimestampsAndWatermarks(
>                         // Switch noWatermarks() to forMonotonousTimestamps()
>                         // and values are being printed.
>                         WatermarkStrategy.<Tuple2<Long, Integer>>noWatermarks()
>                                 .withTimestampAssigner((t, timestamp) -> t.f0))
>                 .keyBy(t -> t.f1)
>                 .window(TumblingEventTimeWindows.of(Time.seconds(1)))
>                 .process(new ProcessWindowFunction<Tuple2<Long, Integer>, String, Integer, TimeWindow>() {
>                     @Override
>                     public void process(Integer key, Context context,
>                             Iterable<Tuple2<Long, Integer>> iterable, Collector<String> out) throws Exception {
>                         int count = 0;
>                         Iterator<Tuple2<Long, Integer>> iter = iterable.iterator();
>                         while (iter.hasNext()) {
>                             count++;
>                             iter.next();
>                         }
>                         out.collect("Key: " + key + " count: " + count);
>                     }
>                 }).print();
>
>         env.execute();
>     }
> }{code}
>  
> The issue is that the stream makes use of _noWatermarks()_, which effectively 
> disables any event-time windowing. 
> As this pipeline can never process values, it is faulty, and Flink should 
> throw an exception when starting up. 
>  
> 
> Proposed change:
> We extend the interface 
> [WatermarkStrategy|https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/common/eventtime/WatermarkStrategy.java#L55]
>  with the method _boolean isEventTime()_.
> We create a new class named _EventTimeWindowPreconditions_ and add the 
> following method to it where we make use of _isEventTime()_:
>  
> {code:java}
> public static void hasPrecedingEventTimeGenerator(final List<Transformation<?>> predecessors) {
>     for (int i = predecessors.size() - 1; i >= 0; i--) {
>         final Transformation<?> pre = predecessors.get(i);
>         if (pre instanceof TimestampsAndWatermarksTransformation) {
>             TimestampsAndWatermarksTransformation<?> timestampsAndWatermarksTransformation =
>                     (TimestampsAndWatermarksTransformation<?>) pre;
>             final WatermarkStrategy<?> waStrat =
>                     timestampsAndWatermarksTransformation.getWatermarkStrategy();
>             // assert that it generates timestamps or throw an exception
>             if (!waStrat.isEventTime()) {
>                 // TODO: Custom exception
>                 throw new IllegalArgumentException(
>                         "Cannot use an EventTime window with a preceding watermark generator which"
>                                 + " does not ingest event times. Did you use noWatermarks() as the WatermarkStrategy"
>                                 + " and use EventTime windows such as TumblingEventTimeWindows/SlidingEventTimeWindows?"
>                                 + " These windows will never window any values as your stream does not support event time");
>             }
>             // We have to terminate the check now as we have found the first most recent
>             // timestamp assigner for this window and ensured that it actually adds event
>             // time stamps. If there has been previously 
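The intent of the proposed precondition can be sketched without Flink's type machinery. In the snippet below, `WatermarkStrategy`, `NoWatermarksStrategy`, and `checkEventTimeSupport` are simplified, hypothetical stand-ins for the proposal (including the suggested `isEventTime()` default method), not existing Flink API:

```java
public class EventTimePrecondition {

    // Stand-in for the proposed WatermarkStrategy#isEventTime() default method.
    interface WatermarkStrategy<T> {
        default boolean isEventTime() {
            return true;
        }
    }

    // Models a strategy created via noWatermarks(): no event-time support.
    static class NoWatermarksStrategy<T> implements WatermarkStrategy<T> {
        @Override
        public boolean isEventTime() {
            return false;
        }
    }

    // The proposed precondition: fail fast when the nearest upstream
    // watermark strategy cannot drive event-time windows.
    static void checkEventTimeSupport(WatermarkStrategy<?> strategy) {
        if (!strategy.isEventTime()) {
            throw new IllegalArgumentException(
                    "Cannot use an event-time window with a watermark strategy"
                            + " that does not generate event-time watermarks.");
        }
    }

    public static void main(String[] args) {
        // Passes: the default assumes event time is supported.
        checkEventTimeSupport(new WatermarkStrategy<Long>() {});
        try {
            checkEventTimeSupport(new NoWatermarksStrategy<Long>());
            System.out.println("no exception");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

With this shape, the check runs once at job construction time, so a misconfigured pipeline fails at startup rather than silently producing no windows.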

[jira] [Comment Edited] (FLINK-24623) Prevent usage of EventTimeWindows when EventTime is disabled

2022-01-21 Thread Alexander Fedulov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479905#comment-17479905
 ] 

Alexander Fedulov edited comment on FLINK-24623 at 1/21/22, 8:47 AM:
-

[~Dario] Hi Dario, could you propose your approach as a PR?


was (Author: afedulov):
[~Dario] Hi Dario, could you propose your approach in a form of a PR?

> Prevent usage of EventTimeWindows when EventTime is disabled
> 
>
> Key: FLINK-24623
> URL: https://issues.apache.org/jira/browse/FLINK-24623
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Dario Heinisch
>Priority: Not a Priority
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The following stream will never process values after the windowing, because 
> event time has been disabled via the watermark strategy:
> {code:java}
> public class PlaygroundJob {
>     public static void main(String[] args) throws Exception {
>         StreamExecutionEnvironment env =
>                 StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());
>
>         DataStreamSource<Tuple2<Long, Integer>> source =
>                 env.addSource(new SourceFunction<Tuple2<Long, Integer>>() {
>                     @Override
>                     public void run(SourceContext<Tuple2<Long, Integer>> sourceContext) throws Exception {
>                         int i = 0;
>                         while (true) {
>                             Tuple2<Long, Integer> tuple = Tuple2.of(System.currentTimeMillis(), i++ % 10);
>                             sourceContext.collect(tuple);
>                         }
>                     }
>
>                     @Override
>                     public void cancel() {}
>                 });
>
>         source.assignTimestampsAndWatermarks(
>                         // Switch noWatermarks() to forMonotonousTimestamps()
>                         // and values are being printed.
>                         WatermarkStrategy.<Tuple2<Long, Integer>>noWatermarks()
>                                 .withTimestampAssigner((t, timestamp) -> t.f0))
>                 .keyBy(t -> t.f1)
>                 .window(TumblingEventTimeWindows.of(Time.seconds(1)))
>                 .process(new ProcessWindowFunction<Tuple2<Long, Integer>, String, Integer, TimeWindow>() {
>                     @Override
>                     public void process(Integer key, Context context,
>                             Iterable<Tuple2<Long, Integer>> iterable, Collector<String> out) throws Exception {
>                         int count = 0;
>                         Iterator<Tuple2<Long, Integer>> iter = iterable.iterator();
>                         while (iter.hasNext()) {
>                             count++;
>                             iter.next();
>                         }
>                         out.collect("Key: " + key + " count: " + count);
>                     }
>                 }).print();
>
>         env.execute();
>     }
> }{code}
>  
> The issue is that the stream makes use of _noWatermarks()_, which effectively 
> disables any event-time windowing. 
> As this pipeline can never process values, it is faulty, and Flink should 
> throw an exception when starting up. 
>  
> 
> Proposed change:
> We extend the interface 
> [WatermarkStrategy|https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/common/eventtime/WatermarkStrategy.java#L55]
>  with the method _boolean isEventTime()_.
> We create a new class named _EventTimeWindowPreconditions_ and add the 
> following method to it where we make use of _isEventTime()_:
>  
> {code:java}
> public static void hasPrecedingEventTimeGenerator(final List<Transformation<?>> predecessors) {
>     for (int i = predecessors.size() - 1; i >= 0; i--) {
>         final Transformation<?> pre = predecessors.get(i);
>         if (pre instanceof TimestampsAndWatermarksTransformation) {
>             TimestampsAndWatermarksTransformation<?> timestampsAndWatermarksTransformation =
>                     (TimestampsAndWatermarksTransformation<?>) pre;
>             final WatermarkStrategy<?> waStrat =
>                     timestampsAndWatermarksTransformation.getWatermarkStrategy();
>             // assert that it generates timestamps or throw an exception
>             if (!waStrat.isEventTime()) {
>                 // TODO: Custom exception
>                 throw new IllegalArgumentException(
>                         "Cannot use an EventTime window with a preceding watermark generator which"
>                                 + " does not ingest event times. Did you use noWatermarks() as the WatermarkStrategy"
>                                 + " and use EventTime windows such as TumblingEventTimeWindows/SlidingEventTimeWindows?"
>                                 + " These windows will never window any values as your stream does not support event time");
>             }
>             // We have to terminate the check now as we have found the first most recent
>             // 

[GitHub] [flink] dannycranmer commented on pull request #18421: [FLINK-25731][connectors/kinesis] Deprecated FlinkKinesisConsumer / F…

2022-01-21 Thread GitBox


dannycranmer commented on pull request #18421:
URL: https://github.com/apache/flink/pull/18421#issuecomment-1018302126


   LGTM, merging






[GitHub] [flink] dannycranmer merged pull request #18421: [FLINK-25731][connectors/kinesis] Deprecated FlinkKinesisConsumer / F…

2022-01-21 Thread GitBox


dannycranmer merged pull request #18421:
URL: https://github.com/apache/flink/pull/18421


   






[GitHub] [flink] rkhachatryan commented on a change in pull request #18324: [FLINK-25557][checkpoint] Introduce incremental/full checkpoint size stats

2022-01-21 Thread GitBox


rkhachatryan commented on a change in pull request #18324:
URL: https://github.com/apache/flink/pull/18324#discussion_r789465033



##
File path: 
flink-state-backends/flink-statebackend-changelog/src/main/java/org/apache/flink/state/changelog/ChangelogKeyedStateBackend.java
##
@@ -368,19 +372,34 @@ public boolean 
deregisterKeySelectionListener(KeySelectionListener listener)
 // collections don't change once started and handles are immutable
 List prevDeltaCopy =
 new 
ArrayList<>(changelogStateBackendStateCopy.getRestoredNonMaterialized());
+long incrementalMaterializeSize = 0L;
 if (delta != null && delta.getStateSize() > 0) {
 prevDeltaCopy.add(delta);
+incrementalMaterializeSize += delta.getIncrementalStateSize();
 }
 
 if (prevDeltaCopy.isEmpty()
 && 
changelogStateBackendStateCopy.getMaterializedSnapshot().isEmpty()) {
 return SnapshotResult.empty();
 } else {
+List materializedSnapshot =
+changelogStateBackendStateCopy.getMaterializedSnapshot();
+for (KeyedStateHandle keyedStateHandle : materializedSnapshot) {
+if (!lastCompletedHandles.contains(keyedStateHandle)) {
+incrementalMaterializeSize += 
keyedStateHandle.getStateSize();

Review comment:
   The data uploaded during the async phase is (usually) created during the 
sync phase, so "Async Persist Checkpoint Data Size" is not very precise. The 
current UI does distinguish the durations of the sync and async phases; also, 
nothing prevents a backend from persisting everything during the sync phase.
   
   Something like "Foreground persist data size" would be more precise, but it 
would confuse non-changelog users, I guess. WDYT?
   
   So maybe "Sync/async Persist Checkpoint Data Size"?








[jira] [Created] (FLINK-25746) Failed to run ITCase locally with IDEA under flink-orc and flink-parquet module

2022-01-21 Thread Jane Chan (Jira)
Jane Chan created FLINK-25746:
-

 Summary: Failed to run ITCase locally with IDEA under flink-orc 
and flink-parquet module
 Key: FLINK-25746
 URL: https://issues.apache.org/jira/browse/FLINK-25746
 Project: Flink
  Issue Type: Bug
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Reporter: Jane Chan


Recently, it has been observed that several integration test cases failed when 
running from IDEA locally, but running them from the maven command line is OK.
h4. How to reproduce
{code:java}
// switch to master branch
git fetch origin
git rebase origin/master
mvn clean install -DskipTests -Dfast -Pskip-webui-build -Dscala-2.12 -T 1C  
{code}

Then run the following tests from IntelliJ IDEA
h4. The affected tests
{code:java}
org.apache.flink.orc.OrcFileSystemITCase
org.apache.flink.orc.OrcFsStreamingSinkITCase
org.apache.flink.formats.parquet.ParquetFileCompactionITCase
org.apache.flink.formats.parquet.ParquetFileSystemITCase
org.apache.flink.formats.parquet.ParquetFsStreamingSinkITCase {code}
h4. The stack trace
{code:java}
java.lang.NoClassDefFoundError: com/google/common/base/MoreObjects    at 
org.apache.calcite.config.CalciteSystemProperty.loadProperties(CalciteSystemProperty.java:404)
    at 
org.apache.calcite.config.CalciteSystemProperty.(CalciteSystemProperty.java:47)
    at org.apache.calcite.util.Util.(Util.java:152)
    at org.apache.calcite.sql.type.SqlTypeName.(SqlTypeName.java:142)
    at 
org.apache.calcite.sql.type.SqlTypeFamily.getTypeNames(SqlTypeFamily.java:163)
    at org.apache.calcite.sql.type.ReturnTypes.(ReturnTypes.java:127)
    at org.apache.calcite.sql.SqlSetOperator.(SqlSetOperator.java:45)
    at 
org.apache.calcite.sql.fun.SqlStdOperatorTable.(SqlStdOperatorTable.java:97)
    at 
org.apache.calcite.sql2rel.StandardConvertletTable.(StandardConvertletTable.java:101)
    at 
org.apache.calcite.sql2rel.StandardConvertletTable.(StandardConvertletTable.java:91)
    at 
org.apache.calcite.tools.Frameworks$ConfigBuilder.(Frameworks.java:234)
    at 
org.apache.calcite.tools.Frameworks$ConfigBuilder.(Frameworks.java:215)
    at org.apache.calcite.tools.Frameworks.newConfigBuilder(Frameworks.java:199)
    at 
org.apache.flink.table.planner.delegation.PlannerContext.createFrameworkConfig(PlannerContext.java:145)
    at 
org.apache.flink.table.planner.delegation.PlannerContext.(PlannerContext.java:129)
    at 
org.apache.flink.table.planner.delegation.PlannerBase.(PlannerBase.scala:118)
    at 
org.apache.flink.table.planner.delegation.StreamPlanner.(StreamPlanner.scala:55)
    at 
org.apache.flink.table.planner.delegation.DefaultPlannerFactory.create(DefaultPlannerFactory.java:62)
    at 
org.apache.flink.table.factories.PlannerFactoryUtil.createPlanner(PlannerFactoryUtil.java:53)
    at 
org.apache.flink.table.api.bridge.scala.internal.StreamTableEnvironmentImpl$.create(StreamTableEnvironmentImpl.scala:323)
    at 
org.apache.flink.table.api.bridge.scala.StreamTableEnvironment$.create(StreamTableEnvironment.scala:925)
    at 
org.apache.flink.table.planner.runtime.utils.StreamingTestBase.before(StreamingTestBase.scala:54)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at 
org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
    at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258)
    at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
    at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.google.common.base.MoreObjects
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 38 more
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.cal
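The root `ClassNotFoundException: com.google.common.base.MoreObjects` indicates that Guava is missing from the classpath IDEA builds for these tests, while the Maven-built classpath resolves it. A minimal diagnostic sketch (hypothetical, not part of the report) that can be run in both environments to confirm the classpaths differ:

```java
public class GuavaClasspathCheck {
    // Returns true if Guava's MoreObjects is loadable from the current classpath.
    static boolean guavaOnClasspath() {
        try {
            Class.forName("com.google.common.base.MoreObjects");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Run once from IDEA and once from the Maven build of the failing
        // module; differing output confirms a classpath mismatch.
        System.out.println("Guava on classpath: " + guavaOnClasspath());
    }
}
```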

[jira] [Updated] (FLINK-25746) Failed to run ITCase locally with IDEA under flink-orc and flink-parquet module

2022-01-21 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-25746:
--
Attachment: image-2022-01-21-16-54-12-354.png

> Failed to run ITCase locally with IDEA under flink-orc and flink-parquet 
> module
> ---
>
> Key: FLINK-25746
> URL: https://issues.apache.org/jira/browse/FLINK-25746
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Jane Chan
>Priority: Major
> Attachments: image-2022-01-21-16-54-12-354.png
>
>
> Recently, it has been observed that several integration test cases failed 
> when running from IDEA locally, but running them from the maven command line 
> is OK.
> h4. How to reproduce
> {code:java}
> // switch to master branch
> git fetch origin
> git rebase origin/master
> mvn clean install -DskipTests -Dfast -Pskip-webui-build -Dscala-2.12 -T 1C  
> {code}
> Then run the following tests from IntelliJ IDEA
> h4. The affected tests
> {code:java}
> org.apache.flink.orc.OrcFileSystemITCase
> org.apache.flink.orc.OrcFsStreamingSinkITCase
> org.apache.flink.formats.parquet.ParquetFileCompactionITCase
> org.apache.flink.formats.parquet.ParquetFileSystemITCase
> org.apache.flink.formats.parquet.ParquetFsStreamingSinkITCase {code}
> h4. The stack trace
> !image-2022-01-21-16-54-12-354.png!
> {code:java}
> java.lang.NoClassDefFoundError: com/google/common/base/MoreObjects    at 
> org.apache.calcite.config.CalciteSystemProperty.loadProperties(CalciteSystemProperty.java:404)
>     at 
> org.apache.calcite.config.CalciteSystemProperty.(CalciteSystemProperty.java:47)
>     at org.apache.calcite.util.Util.(Util.java:152)
>     at org.apache.calcite.sql.type.SqlTypeName.(SqlTypeName.java:142)
>     at 
> org.apache.calcite.sql.type.SqlTypeFamily.getTypeNames(SqlTypeFamily.java:163)
>     at org.apache.calcite.sql.type.ReturnTypes.(ReturnTypes.java:127)
>     at org.apache.calcite.sql.SqlSetOperator.(SqlSetOperator.java:45)
>     at 
> org.apache.calcite.sql.fun.SqlStdOperatorTable.(SqlStdOperatorTable.java:97)
>     at 
> org.apache.calcite.sql2rel.StandardConvertletTable.(StandardConvertletTable.java:101)
>     at 
> org.apache.calcite.sql2rel.StandardConvertletTable.(StandardConvertletTable.java:91)
>     at 
> org.apache.calcite.tools.Frameworks$ConfigBuilder.(Frameworks.java:234)
>     at 
> org.apache.calcite.tools.Frameworks$ConfigBuilder.(Frameworks.java:215)
>     at 
> org.apache.calcite.tools.Frameworks.newConfigBuilder(Frameworks.java:199)
>     at 
> org.apache.flink.table.planner.delegation.PlannerContext.createFrameworkConfig(PlannerContext.java:145)
>     at 
> org.apache.flink.table.planner.delegation.PlannerContext.(PlannerContext.java:129)
>     at 
> org.apache.flink.table.planner.delegation.PlannerBase.(PlannerBase.scala:118)
>     at 
> org.apache.flink.table.planner.delegation.StreamPlanner.(StreamPlanner.scala:55)
>     at 
> org.apache.flink.table.planner.delegation.DefaultPlannerFactory.create(DefaultPlannerFactory.java:62)
>     at 
> org.apache.flink.table.factories.PlannerFactoryUtil.createPlanner(PlannerFactoryUtil.java:53)
>     at 
> org.apache.flink.table.api.bridge.scala.internal.StreamTableEnvironmentImpl$.create(StreamTableEnvironmentImpl.scala:323)
>     at 
> org.apache.flink.table.api.bridge.scala.StreamTableEnvironment$.create(StreamTableEnvironment.scala:925)
>     at 
> org.apache.flink.table.planner.runtime.utils.StreamingTestBase.before(StreamingTestBase.scala:54)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
>     at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.

[jira] [Updated] (FLINK-25746) Failed to run ITCase locally with IDEA under flink-orc and flink-parquet module

2022-01-21 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-25746:
--
Description: 
Recently, it has been observed that several integration test cases failed when 
running from IDEA locally, but running them from the maven command line is OK.
h4. How to reproduce
{code:java}
// switch to master branch
git fetch origin
git rebase origin/master
mvn clean install -DskipTests -Dfast -Pskip-webui-build -Dscala-2.12 -T 1C  
{code}
Then run the following tests from IntelliJ IDEA
h4. The affected tests
{code:java}
org.apache.flink.orc.OrcFileSystemITCase
org.apache.flink.orc.OrcFsStreamingSinkITCase
org.apache.flink.formats.parquet.ParquetFileCompactionITCase
org.apache.flink.formats.parquet.ParquetFileSystemITCase
org.apache.flink.formats.parquet.ParquetFsStreamingSinkITCase {code}
h4. The stack trace

!image-2022-01-21-16-54-12-354.png!
{code:java}
java.lang.NoClassDefFoundError: com/google/common/base/MoreObjects    at 
org.apache.calcite.config.CalciteSystemProperty.loadProperties(CalciteSystemProperty.java:404)
    at 
org.apache.calcite.config.CalciteSystemProperty.(CalciteSystemProperty.java:47)
    at org.apache.calcite.util.Util.(Util.java:152)
    at org.apache.calcite.sql.type.SqlTypeName.(SqlTypeName.java:142)
    at 
org.apache.calcite.sql.type.SqlTypeFamily.getTypeNames(SqlTypeFamily.java:163)
    at org.apache.calcite.sql.type.ReturnTypes.(ReturnTypes.java:127)
    at org.apache.calcite.sql.SqlSetOperator.(SqlSetOperator.java:45)
    at 
org.apache.calcite.sql.fun.SqlStdOperatorTable.(SqlStdOperatorTable.java:97)
    at 
org.apache.calcite.sql2rel.StandardConvertletTable.(StandardConvertletTable.java:101)
    at 
org.apache.calcite.sql2rel.StandardConvertletTable.(StandardConvertletTable.java:91)
    at 
org.apache.calcite.tools.Frameworks$ConfigBuilder.(Frameworks.java:234)
    at 
org.apache.calcite.tools.Frameworks$ConfigBuilder.(Frameworks.java:215)
    at org.apache.calcite.tools.Frameworks.newConfigBuilder(Frameworks.java:199)
    at 
org.apache.flink.table.planner.delegation.PlannerContext.createFrameworkConfig(PlannerContext.java:145)
    at 
org.apache.flink.table.planner.delegation.PlannerContext.(PlannerContext.java:129)
    at 
org.apache.flink.table.planner.delegation.PlannerBase.(PlannerBase.scala:118)
    at 
org.apache.flink.table.planner.delegation.StreamPlanner.(StreamPlanner.scala:55)
    at 
org.apache.flink.table.planner.delegation.DefaultPlannerFactory.create(DefaultPlannerFactory.java:62)
    at 
org.apache.flink.table.factories.PlannerFactoryUtil.createPlanner(PlannerFactoryUtil.java:53)
    at 
org.apache.flink.table.api.bridge.scala.internal.StreamTableEnvironmentImpl$.create(StreamTableEnvironmentImpl.scala:323)
    at 
org.apache.flink.table.api.bridge.scala.StreamTableEnvironment$.create(StreamTableEnvironment.scala:925)
    at 
org.apache.flink.table.planner.runtime.utils.StreamingTestBase.before(StreamingTestBase.scala:54)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at 
org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
    at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258)
    at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
    at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.google.common.base.MoreObjects
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 38 more
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.calcite.sql2rel.StandardConvertletTable    at 
org.apache.calcite.tools.Frameworks$ConfigBuilder.(Frameworks.java:234)
    at 
org.apache.calcite.tools.Frameworks$ConfigBuilder.(Frameworks.ja

[GitHub] [flink] flinkbot edited a comment on pull request #18153: [FLINK-25568][connectors/elasticsearch] Add Elasticsearch 7 Source

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18153:
URL: https://github.com/apache/flink/pull/18153#issuecomment-997756404


   
   ## CI report:
   
   * b29c358b51b4a803f129eab3ad8723747510d3c0 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29791)
 
   * dc3cdf818723ec45e61511b4ed1e0b08cabbff29 UNKNOWN
   
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #18437: [FLINK-25712][connector/tests] Merge flink-connector-testing into flink-connector-test-utils

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18437:
URL: https://github.com/apache/flink/pull/18437#issuecomment-1018174136


   
   ## CI report:
   
   * be3d03d337bb7358ee949445f0530a73d02c43dc Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29854)
 
   * 049a0ebc6535291170b03e739521344d54809682 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29863)
 
   * 66184529bb0e4f34be6b7e3755d06cd50939f894 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29870)
 
   
   
   






[jira] [Updated] (FLINK-25746) Failed to run ITCase locally with IDEA under flink-orc and flink-parquet module

2022-01-21 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-25746:
--
Attachment: image-2022-01-21-16-56-42-156.png

> Failed to run ITCase locally with IDEA under flink-orc and flink-parquet 
> module
> ---
>
> Key: FLINK-25746
> URL: https://issues.apache.org/jira/browse/FLINK-25746
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Jane Chan
>Priority: Major
> Attachments: image-2022-01-21-16-54-12-354.png, 
> image-2022-01-21-16-56-42-156.png
>
>
> Recently, it has been observed that several integration test cases failed 
> when running from IDEA locally, but running them from the maven command line 
> is OK.
> h4. How to reproduce
> {code:java}
> // switch to master branch
> git fetch origin
> git rebase origin/master
> mvn clean install -DskipTests -Dfast -Pskip-webui-build -Dscala-2.12 -T 1C  
> {code}
> Then run the following tests from IntelliJ IDEA
> h4. The affected tests
> {code:java}
> org.apache.flink.orc.OrcFileSystemITCase
> org.apache.flink.orc.OrcFsStreamingSinkITCase
> org.apache.flink.formats.parquet.ParquetFileCompactionITCase
> org.apache.flink.formats.parquet.ParquetFileSystemITCase
> org.apache.flink.formats.parquet.ParquetFsStreamingSinkITCase {code}
> h4. The stack trace
> !image-2022-01-21-16-54-12-354.png!
> {code:java}
> java.lang.NoClassDefFoundError: com/google/common/base/MoreObjects    at 
> org.apache.calcite.config.CalciteSystemProperty.loadProperties(CalciteSystemProperty.java:404)
>     at 
> org.apache.calcite.config.CalciteSystemProperty.(CalciteSystemProperty.java:47)
>     at org.apache.calcite.util.Util.(Util.java:152)
>     at org.apache.calcite.sql.type.SqlTypeName.(SqlTypeName.java:142)
>     at 
> org.apache.calcite.sql.type.SqlTypeFamily.getTypeNames(SqlTypeFamily.java:163)
>     at org.apache.calcite.sql.type.ReturnTypes.(ReturnTypes.java:127)
>     at org.apache.calcite.sql.SqlSetOperator.(SqlSetOperator.java:45)
>     at 
> org.apache.calcite.sql.fun.SqlStdOperatorTable.(SqlStdOperatorTable.java:97)
>     at 
> org.apache.calcite.sql2rel.StandardConvertletTable.(StandardConvertletTable.java:101)
>     at 
> org.apache.calcite.sql2rel.StandardConvertletTable.(StandardConvertletTable.java:91)
>     at 
> org.apache.calcite.tools.Frameworks$ConfigBuilder.(Frameworks.java:234)
>     at 
> org.apache.calcite.tools.Frameworks$ConfigBuilder.(Frameworks.java:215)
>     at 
> org.apache.calcite.tools.Frameworks.newConfigBuilder(Frameworks.java:199)
>     at 
> org.apache.flink.table.planner.delegation.PlannerContext.createFrameworkConfig(PlannerContext.java:145)
>     at 
> org.apache.flink.table.planner.delegation.PlannerContext.(PlannerContext.java:129)
>     at 
> org.apache.flink.table.planner.delegation.PlannerBase.(PlannerBase.scala:118)
>     at 
> org.apache.flink.table.planner.delegation.StreamPlanner.(StreamPlanner.scala:55)
>     at 
> org.apache.flink.table.planner.delegation.DefaultPlannerFactory.create(DefaultPlannerFactory.java:62)
>     at 
> org.apache.flink.table.factories.PlannerFactoryUtil.createPlanner(PlannerFactoryUtil.java:53)
>     at 
> org.apache.flink.table.api.bridge.scala.internal.StreamTableEnvironmentImpl$.create(StreamTableEnvironmentImpl.scala:323)
>     at 
> org.apache.flink.table.api.bridge.scala.StreamTableEnvironment$.create(StreamTableEnvironment.scala:925)
>     at 
> org.apache.flink.table.planner.runtime.utils.StreamingTestBase.before(StreamingTestBase.scala:54)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
>     at 
> org.junit.internal.runners.statements.FailOnTimeout

[jira] [Commented] (FLINK-25746) Failed to run ITCase locally with IDEA under flink-orc and flink-parquet module

2022-01-21 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479909#comment-17479909
 ] 

Jane Chan commented on FLINK-25746:
---

However, the test passes when run from the command line:
{code:java}

mvn test -Dtest=ParquetFileCompactionITCase {code}
!image-2022-01-21-16-56-42-156.png!

 

> Failed to run ITCase locally with IDEA under flink-orc and flink-parquet 
> module
> ---
>
> Key: FLINK-25746
> URL: https://issues.apache.org/jira/browse/FLINK-25746
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Jane Chan
>Priority: Major
> Attachments: image-2022-01-21-16-54-12-354.png, 
> image-2022-01-21-16-56-42-156.png
>

[jira] [Updated] (FLINK-25746) Failed to run ITCase locally with IDEA under flink-orc and flink-parquet module

2022-01-21 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-25746:
--
Affects Version/s: 1.15.0

> Failed to run ITCase locally with IDEA under flink-orc and flink-parquet 
> module
> ---
>
> Key: FLINK-25746
> URL: https://issues.apache.org/jira/browse/FLINK-25746
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.15.0
>Reporter: Jane Chan
>Priority: Major
> Attachments: image-2022-01-21-16-54-12-354.png, 
> image-2022-01-21-16-56-42-156.png
>

[GitHub] [flink] flinkbot edited a comment on pull request #18153: [FLINK-25568][connectors/elasticsearch] Add Elasticsearch 7 Source

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18153:
URL: https://github.com/apache/flink/pull/18153#issuecomment-997756404


   
   ## CI report:
   
   * b29c358b51b4a803f129eab3ad8723747510d3c0 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29791)
 
   * dc3cdf818723ec45e61511b4ed1e0b08cabbff29 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29872)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-25747) UdfStreamOperatorCheckpointingITCase hangs on AZP

2022-01-21 Thread Till Rohrmann (Jira)
Till Rohrmann created FLINK-25747:
-

 Summary: UdfStreamOperatorCheckpointingITCase hangs on AZP
 Key: FLINK-25747
 URL: https://issues.apache.org/jira/browse/FLINK-25747
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Affects Versions: 1.15.0
Reporter: Till Rohrmann


The test {{UdfStreamOperatorCheckpointingITCase}} hangs on AZP.

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29840&view=logs&j=b0a398c0-685b-599c-eb57-c8c2a771138e&t=d13f554f-d4b9-50f8-30ee-d49c6fb0b3cc&l=15424



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18268: [FLINK-14902][connector] Supports jdbc async lookup join

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18268:
URL: https://github.com/apache/flink/pull/18268#issuecomment-1005479356


   
   ## CI report:
   
   * fe7d298f1ac142a2fa53241df09eaf93b44ec4be Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29290)
 
   * f64a2fe9f7c1ca9a041641256dc09d00253ce837 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25746) Failed to run ITCase locally with IDEA under flink-orc and flink-parquet module

2022-01-21 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479923#comment-17479923
 ] 

Jane Chan commented on FLINK-25746:
---

h4. Some observations

The dependency tree is listed below.

Note that although guava is on the classpath, either its version is 11.0.2, 
which does not contain the class MoreObjects, or it is shaded.
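
A quick, build-independent way to check this suspicion is to probe the classpath directly. The sketch below is illustrative and not part of Flink: the class name {{GuavaProbe}} is hypothetical, while {{com.google.common.base.MoreObjects}} is the real guava class from the stack trace (added in guava 18.0, so absent from the 11.0.2 that hadoop-common pulls in).

```java
// Minimal sketch (not part of Flink): report which jar, if any, provides
// com.google.common.base.MoreObjects on the current classpath.
public class GuavaProbe {

    /** Returns the code-source location of MoreObjects, or null if absent. */
    static String locateMoreObjects() {
        try {
            Class<?> clazz = Class.forName("com.google.common.base.MoreObjects");
            java.security.CodeSource src = clazz.getProtectionDomain().getCodeSource();
            return src == null ? "bootstrap classpath" : src.getLocation().toString();
        } catch (ClassNotFoundException e) {
            // guava versions before 18.0 (e.g. the 11.0.2 from hadoop-common)
            // do not contain MoreObjects; this branch mirrors the root cause
            // of the NoClassDefFoundError seen in the stack trace.
            return null;
        }
    }

    public static void main(String[] args) {
        String location = locateMoreObjects();
        System.out.println(location == null
                ? "MoreObjects is NOT on the classpath"
                : "MoreObjects loaded from " + location);
    }
}
```

Running this with the same classpath IDEA assembles for the test would show whether the IDE resolves the unshaded guava 11.0.2 while the Maven build sees a shaded or newer one.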

 
{code:java}
[INFO] --- maven-dependency-plugin:3.2.0:tree (default-cli) @ flink-parquet ---
[INFO] org.apache.flink:flink-parquet:jar:1.15-SNAPSHOT
[INFO] +- org.apache.flink:flink-core:jar:1.15-SNAPSHOT:provided
[INFO] |  +- org.apache.flink:flink-annotations:jar:1.15-SNAPSHOT:provided
[INFO] |  +- org.apache.flink:flink-metrics-core:jar:1.15-SNAPSHOT:provided
[INFO] |  +- org.apache.flink:flink-shaded-asm-7:jar:7.1-14.0:provided
[INFO] |  +- org.apache.commons:commons-lang3:jar:3.3.2:provided
[INFO] |  +- com.esotericsoftware.kryo:kryo:jar:2.24.0:provided
[INFO] |  |  \- com.esotericsoftware.minlog:minlog:jar:1.2:provided
[INFO] |  +- commons-collections:commons-collections:jar:3.2.2:provided
[INFO] |  +- org.apache.commons:commons-compress:jar:1.21:compile
[INFO] |  \- org.apache.flink:flink-shaded-guava:jar:30.1.1-jre-14.0:provided
[INFO] +- org.apache.flink:flink-table-common:jar:1.15-SNAPSHOT:provided 
(optional) 
[INFO] |  \- com.ibm.icu:icu4j:jar:67.1:provided (optional) 
[INFO] +- org.apache.flink:flink-avro:jar:1.15-SNAPSHOT:compile (optional) 
[INFO] |  \- org.apache.avro:avro:jar:1.10.0:compile
[INFO] |     +- com.fasterxml.jackson.core:jackson-core:jar:2.13.0:compile
[INFO] |     \- com.fasterxml.jackson.core:jackson-databind:jar:2.13.0:compile
[INFO] |        \- 
com.fasterxml.jackson.core:jackson-annotations:jar:2.13.0:compile
[INFO] +- org.apache.flink:flink-connector-files:jar:1.15-SNAPSHOT:provided 
(optional) 
[INFO] |  \- org.apache.flink:flink-file-sink-common:jar:1.15-SNAPSHOT:provided
[INFO] +- org.apache.parquet:parquet-hadoop:jar:1.12.2:compile
[INFO] |  +- org.apache.parquet:parquet-column:jar:1.12.2:compile
[INFO] |  |  \- org.apache.parquet:parquet-encoding:jar:1.12.2:compile
[INFO] |  +- org.apache.parquet:parquet-format-structures:jar:1.12.2:compile
[INFO] |  |  \- javax.annotation:javax.annotation-api:jar:1.3.2:compile
[INFO] |  +- org.apache.parquet:parquet-jackson:jar:1.12.2:compile
[INFO] |  +- commons-pool:commons-pool:jar:1.6:compile
[INFO] |  \- com.github.luben:zstd-jni:jar:1.4.9-1:compile
[INFO] +- org.apache.hadoop:hadoop-common:jar:2.8.5:provided
[INFO] |  +- org.apache.hadoop:hadoop-annotations:jar:2.8.5:provided
[INFO] |  +- com.google.guava:guava:jar:11.0.2:compile
[INFO] |  +- commons-cli:commons-cli:jar:1.5.0:provided
[INFO] |  +- org.apache.commons:commons-math3:jar:3.6.1:provided
[INFO] |  +- xmlenc:xmlenc:jar:0.52:provided
[INFO] |  +- org.apache.httpcomponents:httpclient:jar:4.5.13:compile
[INFO] |  |  \- org.apache.httpcomponents:httpcore:jar:4.4.14:compile
[INFO] |  +- commons-codec:commons-codec:jar:1.15:compile
[INFO] |  +- commons-io:commons-io:jar:2.11.0:provided
[INFO] |  +- commons-net:commons-net:jar:3.1:provided
[INFO] |  +- javax.servlet:servlet-api:jar:2.5:compile
[INFO] |  +- org.mortbay.jetty:jetty:jar:6.1.26:provided
[INFO] |  +- org.mortbay.jetty:jetty-util:jar:6.1.26:provided
[INFO] |  +- org.mortbay.jetty:jetty-sslengine:jar:6.1.26:provided
[INFO] |  +- javax.servlet.jsp:jsp-api:jar:2.1:provided
[INFO] |  +- com.sun.jersey:jersey-core:jar:1.9:provided
[INFO] |  +- com.sun.jersey:jersey-json:jar:1.9:provided
[INFO] |  |  +- org.codehaus.jettison:jettison:jar:1.1:provided
[INFO] |  |  +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:provided
[INFO] |  |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:provided
[INFO] |  |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:provided
[INFO] |  +- com.sun.jersey:jersey-server:jar:1.9:provided
[INFO] |  |  \- asm:asm:jar:3.1:provided
[INFO] |  +- commons-logging:commons-logging:jar:1.1.3:compile
[INFO] |  +- net.java.dev.jets3t:jets3t:jar:0.9.0:provided
[INFO] |  |  \- com.jamesmurty.utils:java-xmlbuilder:jar:0.4:provided
[INFO] |  +- commons-lang:commons-lang:jar:2.6:compile
[INFO] |  +- commons-configuration:commons-configuration:jar:1.7:provided
[INFO] |  |  +- commons-digester:commons-digester:jar:1.8.1:provided
[INFO] |  |  \- commons-beanutils:commons-beanutils:jar:1.8.3:provided
[INFO] |  +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:provided
[INFO] |  +- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:provided
[INFO] |  +- com.google.code.gson:gson:jar:2.2.4:provided
[INFO] |  +- org.apache.hadoop:hadoop-auth:jar:2.8.5:provided
[INFO] |  |  +- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:provided
[INFO] |  |  |  +- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:provided
[INFO] |  |  |  \- net.minidev:json-smart:jar:2.3:provided (version selected 
from constraint [1.3.1,2.3])
[INFO] |  |  |     \- net.minidev:accessors-smart:jar:1.2:provided
[INFO] | 

[jira] [Commented] (FLINK-24119) KafkaITCase.testTimestamps fails due to "Topic xxx already exist"

2022-01-21 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479924#comment-17479924
 ] 

Till Rohrmann commented on FLINK-24119:
---

Similar problem for 
{{KafkaShuffleExactlyOnceITCase.testFailureRecoveryEventTime}}: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29839&view=logs&j=4be4ed2b-549a-533d-aa33-09e28e360cc8&t=f7d83ad5-3324-5307-0eb3-819065cdcb38&l=8573

> KafkaITCase.testTimestamps fails due to "Topic xxx already exist"
> -
>
> Key: FLINK-24119
> URL: https://issues.apache.org/jira/browse/FLINK-24119
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
>Reporter: Xintong Song
>Assignee: Fabian Paul
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23328&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=15a22db7-8faa-5b34-3920-d33c9f0ca23c&l=7419
> {code}
> Sep 01 15:53:20 [ERROR] Tests run: 23, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 162.65 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase
> Sep 01 15:53:20 [ERROR] testTimestamps  Time elapsed: 23.237 s  <<< FAILURE!
> Sep 01 15:53:20 java.lang.AssertionError: Create test topic : tstopic failed, 
> org.apache.kafka.common.errors.TopicExistsException: Topic 'tstopic' already 
> exists.
> Sep 01 15:53:20   at org.junit.Assert.fail(Assert.java:89)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:226)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:112)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:212)
> Sep 01 15:53:20   at 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase.testTimestamps(KafkaITCase.java:191)
> Sep 01 15:53:20   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Sep 01 15:53:20   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Sep 01 15:53:20   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Sep 01 15:53:20   at java.lang.reflect.Method.invoke(Method.java:498)
> Sep 01 15:53:20   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Sep 01 15:53:20   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Sep 01 15:53:20   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Sep 01 15:53:20   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Sep 01 15:53:20   at java.lang.Thread.run(Thread.java:748)
> {code}
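
The race behind this flakiness — a topic surviving from an earlier aborted or timed-out run — is commonly mitigated by making test-topic creation idempotent. The sketch below models that pattern without a broker; the class {{IdempotentTopicCreate}} and its in-memory registry are illustrative stand-ins, not Flink's or Kafka's actual test utilities.

```java
import java.util.HashSet;
import java.util.Set;

// Hedged, broker-free sketch of idempotent test-topic creation: a leftover
// topic from a previous run is treated as success instead of a test failure.
// The class and its in-memory "registry" are illustrative, not Kafka APIs.
public class IdempotentTopicCreate {

    // stand-in for the broker's topic list; a real test would query Kafka
    static final Set<String> existingTopics = new HashSet<>();

    static void createTopic(String name) {
        if (!existingTopics.add(name)) {
            // mirrors the TopicExistsException raised by the real broker
            throw new IllegalStateException("Topic '" + name + "' already exists.");
        }
    }

    /** Creates the topic, swallowing only the "already exists" failure. */
    static void createTopicIfAbsent(String name) {
        try {
            createTopic(name);
        } catch (IllegalStateException alreadyExists) {
            // benign: an earlier (possibly aborted) run created it
        }
    }
}
```

In a real fixture the same guard would wrap the AdminClient call and rethrow any failure whose cause is not a TopicExistsException, so genuine broker errors still fail the test.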



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25698) Elasticsearch7DynamicSinkITCase.testWritingDocuments fails on AZP

2022-01-21 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479925#comment-17479925
 ] 

Till Rohrmann commented on FLINK-25698:
---

Another instance: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29839&view=logs&j=c91190b6-40ae-57b2-5999-31b869b0a7c1&t=41463ccd-0694-5d4d-220d-8f771e7d098b&l=12662

> Elasticsearch7DynamicSinkITCase.testWritingDocuments fails on AZP
> -
>
> Key: FLINK-25698
> URL: https://issues.apache.org/jira/browse/FLINK-25698
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.14.3
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> The test {{Elasticsearch7DynamicSinkITCase.testWritingDocuments}} fails on 
> AZP with
> {code}
> 2022-01-19T01:36:13.5231872Z Jan 19 01:36:13 [ERROR] Tests run: 4, Failures: 
> 0, Errors: 1, Skipped: 0, Time elapsed: 60.838 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7DynamicSinkITCase
> 2022-01-19T01:36:13.5233438Z Jan 19 01:36:13 [ERROR] testWritingDocuments  
> Time elapsed: 32.146 s  <<< ERROR!
> 2022-01-19T01:36:13.5234330Z Jan 19 01:36:13 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2022-01-19T01:36:13.5235274Z Jan 19 01:36:13  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> 2022-01-19T01:36:13.5238310Z Jan 19 01:36:13  at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137)
> 2022-01-19T01:36:13.5239309Z Jan 19 01:36:13  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2022-01-19T01:36:13.5239953Z Jan 19 01:36:13  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2022-01-19T01:36:13.5240822Z Jan 19 01:36:13  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-01-19T01:36:13.5241441Z Jan 19 01:36:13  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-01-19T01:36:13.5242318Z Jan 19 01:36:13  at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:258)
> 2022-01-19T01:36:13.5243144Z Jan 19 01:36:13  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-01-19T01:36:13.5244370Z Jan 19 01:36:13  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-01-19T01:36:13.5245319Z Jan 19 01:36:13  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-01-19T01:36:13.5246074Z Jan 19 01:36:13  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-01-19T01:36:13.5246970Z Jan 19 01:36:13  at 
> org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1389)
> 2022-01-19T01:36:13.5247832Z Jan 19 01:36:13  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
> 2022-01-19T01:36:13.5248788Z Jan 19 01:36:13  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
> 2022-01-19T01:36:13.5249775Z Jan 19 01:36:13  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
> 2022-01-19T01:36:13.5250826Z Jan 19 01:36:13  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-01-19T01:36:13.5251625Z Jan 19 01:36:13  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-01-19T01:36:13.5252531Z Jan 19 01:36:13  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-01-19T01:36:13.5253441Z Jan 19 01:36:13  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-01-19T01:36:13.5254118Z Jan 19 01:36:13  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47)
> 2022-01-19T01:36:13.5254753Z Jan 19 01:36:13  at 
> akka.dispatch.OnComplete.internal(Future.scala:300)
> 2022-01-19T01:36:13.5255381Z Jan 19 01:36:13  at 
> akka.dispatch.OnComplete.internal(Future.scala:297)
> 2022-01-19T01:36:13.5256202Z Jan 19 01:36:13  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:224)
> 2022-01-19T01:36:13.5256842Z Jan 19 01:36:13  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:221)
> 2022-01-19T01:36:13.5257400Z Jan 19 01:36:13  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
> 2022-01-19T01:36:13.5258296Z Jan

[jira] [Assigned] (FLINK-25748) Website misses some Repositories

2022-01-21 Thread Konstantin Knauf (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Knauf reassigned FLINK-25748:


Assignee: Konstantin Knauf

> Website misses some Repositories
> 
>
> Key: FLINK-25748
> URL: https://issues.apache.org/jira/browse/FLINK-25748
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Reporter: Konstantin Knauf
>Assignee: Konstantin Knauf
>Priority: Minor
>
> https://flink.apache.org/community.html should list 
> * https://github.com/apache/flink-table-store
> * https://github.com/apache/flink-ml
> * https://github.com/apache/flink-benchmarks
> * https://github.com/apache/flink-statefun-playground
> * https://github.com/apache/flink-training
> * https://github.com/apache/flink-playgrounds
> * https://github.com/apache/flink-jira-bot
> * https://github.com/apache/flink-connectors
> as repositories of the project.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25749) YARNSessionFIFOSecuredITCase.testDetachedMode fails on AZP

2022-01-21 Thread Till Rohrmann (Jira)
Till Rohrmann created FLINK-25749:
-

 Summary: YARNSessionFIFOSecuredITCase.testDetachedMode fails on AZP
 Key: FLINK-25749
 URL: https://issues.apache.org/jira/browse/FLINK-25749
 Project: Flink
  Issue Type: Bug
  Components: Deployment / YARN
Affects Versions: 1.15.0
Reporter: Till Rohrmann


The test {{YARNSessionFIFOSecuredITCase.testDetachedMode}} fails on AZP:

{code}
2022-01-21T03:28:18.3712993Z Jan 21 03:28:18 java.lang.AssertionError: 
2022-01-21T03:28:18.3715115Z Jan 21 03:28:18 Found a file 
/__w/2/s/flink-yarn-tests/target/flink-yarn-tests-fifo-secured/flink-yarn-tests-fifo-secured-logDir-nm-0_0/application_1642735639007_0002/container_1642735639007_0002_01_01/jobmanager.log
 with a prohibited string (one of [Exception, Started 
SelectChannelConnector@0.0.0.0:8081]). Excerpts:
2022-01-21T03:28:18.3716389Z Jan 21 03:28:18 [
2022-01-21T03:28:18.3717531Z Jan 21 03:28:18 2022-01-21 03:27:56,921 INFO  
org.apache.flink.runtime.resourcemanager.ResourceManagerServiceImpl [] - 
Resource manager service is not running. Ignore revoking leadership.
2022-01-21T03:28:18.3720496Z Jan 21 03:28:18 2022-01-21 03:27:56,922 INFO  
org.apache.flink.runtime.dispatcher.StandaloneDispatcher [] - Stopped 
dispatcher akka.tcp://flink@11c5f741db81:37697/user/rpc/dispatcher_0.
2022-01-21T03:28:18.3722401Z Jan 21 03:28:18 2022-01-21 03:27:56,922 INFO  
org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl [] - 
Interrupted while waiting for queue
2022-01-21T03:28:18.3723661Z Jan 21 03:28:18 java.lang.InterruptedException: 
null
2022-01-21T03:28:18.3724529Z Jan 21 03:28:18at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
 ~[?:1.8.0_292]
2022-01-21T03:28:18.3725450Z Jan 21 03:28:18at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
 ~[?:1.8.0_292]
2022-01-21T03:28:18.3726239Z Jan 21 03:28:18at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
~[?:1.8.0_292]
2022-01-21T03:28:18.3727618Z Jan 21 03:28:18at 
org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$CallbackHandlerThread.run(AMRMClientAsyncImpl.java:323)
 [hadoop-yarn-client-2.8.5.jar:?]
2022-01-21T03:28:18.3729147Z Jan 21 03:28:18 2022-01-21 03:27:56,927 WARN  
org.apache.hadoop.ipc.Client [] - Failed to 
connect to server: 11c5f741db81/172.25.0.2:39121: retries get failed due to 
exceeded maximum allowed retries number: 0
2022-01-21T03:28:18.3730293Z Jan 21 03:28:18 
java.nio.channels.ClosedByInterruptException: null
2022-01-21T03:28:18.3730834Z Jan 21 03:28:18 
java.nio.channels.ClosedByInterruptException: null
2022-01-21T03:28:18.3731499Z Jan 21 03:28:18at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
 ~[?:1.8.0_292]
2022-01-21T03:28:18.3732203Z Jan 21 03:28:18at 
sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:658) ~[?:1.8.0_292]
2022-01-21T03:28:18.3733478Z Jan 21 03:28:18at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192) 
~[hadoop-common-2.8.5.jar:?]
2022-01-21T03:28:18.3734470Z Jan 21 03:28:18at 
org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) 
~[hadoop-common-2.8.5.jar:?]
2022-01-21T03:28:18.3735432Z Jan 21 03:28:18at 
org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:685) 
[hadoop-common-2.8.5.jar:?]
2022-01-21T03:28:18.3736414Z Jan 21 03:28:18at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:788) 
[hadoop-common-2.8.5.jar:?]
2022-01-21T03:28:18.3737734Z Jan 21 03:28:18at 
org.apache.hadoop.ipc.Client$Connection.access$3500(Client.java:410) 
[hadoop-common-2.8.5.jar:?]
2022-01-21T03:28:18.3738853Z Jan 21 03:28:18at 
org.apache.hadoop.ipc.Client.getConnection(Client.java:1550) 
[hadoop-common-2.8.5.jar:?]
2022-01-21T03:28:18.3739752Z Jan 21 03:28:18at 
org.apache.hadoop.ipc.Client.call(Client.java:1381) [hadoop-common-2.8.5.jar:?]
2022-01-21T03:28:18.3740638Z Jan 21 03:28:18at 
org.apache.hadoop.ipc.Client.call(Client.java:1345) [hadoop-common-2.8.5.jar:?]
2022-01-21T03:28:18.3741589Z Jan 21 03:28:18at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
 [hadoop-common-2.8.5.jar:?]
2022-01-21T03:28:18.3742621Z Jan 21 03:28:18at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
 [hadoop-common-2.8.5.jar:?]
2022-01-21T03:28:18.3743549Z Jan 21 03:28:18at 
com.sun.proxy.$Proxy51.stopContainers(Unknown Source) [?:?]
2022-01-21T03:28:18.3744684Z Jan 21 03:28:18at 
org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.stopContainers(ContainerManagementProtocolPBClientImpl.java:120)
 [hadoop-yarn-com

[jira] [Created] (FLINK-25748) Website misses some Repositories

2022-01-21 Thread Konstantin Knauf (Jira)
Konstantin Knauf created FLINK-25748:


 Summary: Website misses some Repositories
 Key: FLINK-25748
 URL: https://issues.apache.org/jira/browse/FLINK-25748
 Project: Flink
  Issue Type: Bug
  Components: Project Website
Reporter: Konstantin Knauf


https://flink.apache.org/community.html should list 

* https://github.com/apache/flink-table-store
* https://github.com/apache/flink-ml
* https://github.com/apache/flink-benchmarks
* https://github.com/apache/flink-statefun-playground
* https://github.com/apache/flink-training
* https://github.com/apache/flink-playgrounds
* https://github.com/apache/flink-jira-bot
* https://github.com/apache/flink-connectors

As repositories of the project.





[GitHub] [flink] zentol commented on a change in pull request #18416: [FLINK-25715][clients] Add deployment option (`execution.submit-failed-job-on-application-error`) for submitting a failed job when

2022-01-21 Thread GitBox


zentol commented on a change in pull request #18416:
URL: https://github.com/apache/flink/pull/18416#discussion_r789479364



##
File path: 
flink-clients/src/main/java/org/apache/flink/client/deployment/application/ApplicationDispatcherBootstrap.java
##
@@ -266,10 +271,22 @@ private void runApplicationEntryPoint(
 final Set tolerateMissingResult,
 final DispatcherGateway dispatcherGateway,
 final ScheduledExecutor scheduledExecutor,
-final boolean enforceSingleJobExecution) {
+final boolean enforceSingleJobExecution,
+final boolean submitFailedJobOnApplicationError) {
+if (submitFailedJobOnApplicationError && !enforceSingleJobExecution) {
+dispatcherGateway.submitFailedJob(
+ZERO_JOB_ID,
+FAILED_JOB_NAME,
+new IllegalStateException(
+String.format(
+"Submission of failed job in case of an 
application error ('%s') is not supported in non-HA setups.",

Review comment:
   > If the exception happens between first and second submission (first 
one has already completed). What job id do we submit the job with?
   
   If the job had run the job ID would be random as well, right? Couldn't we 
use that then?
   
   > using ZERO_JOB_ID might not be correct
   
   We should try to reduce this usage as much as possible, because it is quite 
problematic (e.g., it breaks archiving).
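To illustrate the concern above, here is a hedged sketch in plain Java (not Flink's actual API; the class and helper names are made up) of how a stable, non-zero 128-bit job id could be derived from the application name instead of reusing a reserved all-zero id:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

public class FailedJobIds {

    // Hypothetical helper: derive a stable, non-zero 128-bit job id from the
    // application name, so a failed-job submission does not reuse the reserved
    // all-zero id (which, as noted in the review comment, breaks archiving).
    static byte[] deriveJobIdBytes(String applicationName) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(applicationName.getBytes(StandardCharsets.UTF_8));
        // A job id is 16 bytes (128 bits); truncate the 32-byte digest.
        return Arrays.copyOf(digest, 16);
    }

    public static void main(String[] args) throws Exception {
        byte[] id = deriveJobIdBytes("my-application");
        System.out.println(id.length); // 16
    }
}
```

Because the id is a function of the application name, the user can know it upfront, which addresses the "user should know the jobId" point raised later in the thread.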




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18238: WIP: [FLINK-XXXXX] Task local recovery for the reactive mode.

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18238:
URL: https://github.com/apache/flink/pull/18238#issuecomment-1002686071


   
   ## CI report:
   
   * ba54b091609f7e51792a911192e89e0eb7b6d1d7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28719)
 
   * 8117d232abbcd9f80441b258002c50b2474d6ac7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18268: [FLINK-14902][connector] Supports jdbc async lookup join

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18268:
URL: https://github.com/apache/flink/pull/18268#issuecomment-1005479356


   
   ## CI report:
   
   * fe7d298f1ac142a2fa53241df09eaf93b44ec4be Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29290)
 
   * f64a2fe9f7c1ca9a041641256dc09d00253ce837 UNKNOWN
   * 3bdbb98c9d7831eb50e16102740b7193c5376646 UNKNOWN
   
   






[jira] [Commented] (FLINK-25749) YARNSessionFIFOSecuredITCase.testDetachedMode fails on AZP

2022-01-21 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479930#comment-17479930
 ] 

Till Rohrmann commented on FLINK-25749:
---

Another instance: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29841&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=d04c9862-880c-52f5-574b-a7a79fef8e0f

> YARNSessionFIFOSecuredITCase.testDetachedMode fails on AZP
> --
>
> Key: FLINK-25749
> URL: https://issues.apache.org/jira/browse/FLINK-25749
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.15.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> The test {{YARNSessionFIFOSecuredITCase.testDetachedMode}} fails on AZP:
> {code}
> 2022-01-21T03:28:18.3712993Z Jan 21 03:28:18 java.lang.AssertionError: 
> 2022-01-21T03:28:18.3715115Z Jan 21 03:28:18 Found a file 
> /__w/2/s/flink-yarn-tests/target/flink-yarn-tests-fifo-secured/flink-yarn-tests-fifo-secured-logDir-nm-0_0/application_1642735639007_0002/container_1642735639007_0002_01_01/jobmanager.log
>  with a prohibited string (one of [Exception, Started 
> SelectChannelConnector@0.0.0.0:8081]). Excerpts:
> 2022-01-21T03:28:18.3716389Z Jan 21 03:28:18 [
> 2022-01-21T03:28:18.3717531Z Jan 21 03:28:18 2022-01-21 03:27:56,921 INFO  
> org.apache.flink.runtime.resourcemanager.ResourceManagerServiceImpl [] - 
> Resource manager service is not running. Ignore revoking leadership.
> 2022-01-21T03:28:18.3720496Z Jan 21 03:28:18 2022-01-21 03:27:56,922 INFO  
> org.apache.flink.runtime.dispatcher.StandaloneDispatcher [] - Stopped 
> dispatcher akka.tcp://flink@11c5f741db81:37697/user/rpc/dispatcher_0.
> 2022-01-21T03:28:18.3722401Z Jan 21 03:28:18 2022-01-21 03:27:56,922 INFO  
> org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl [] - 
> Interrupted while waiting for queue
> 2022-01-21T03:28:18.3723661Z Jan 21 03:28:18 java.lang.InterruptedException: 
> null
> 2022-01-21T03:28:18.3724529Z Jan 21 03:28:18  at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
>  ~[?:1.8.0_292]
> 2022-01-21T03:28:18.3725450Z Jan 21 03:28:18  at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
>  ~[?:1.8.0_292]
> 2022-01-21T03:28:18.3726239Z Jan 21 03:28:18  at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
> ~[?:1.8.0_292]
> 2022-01-21T03:28:18.3727618Z Jan 21 03:28:18  at 
> org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$CallbackHandlerThread.run(AMRMClientAsyncImpl.java:323)
>  [hadoop-yarn-client-2.8.5.jar:?]
> 2022-01-21T03:28:18.3729147Z Jan 21 03:28:18 2022-01-21 03:27:56,927 WARN  
> org.apache.hadoop.ipc.Client [] - Failed to 
> connect to server: 11c5f741db81/172.25.0.2:39121: retries get failed due to 
> exceeded maximum allowed retries number: 0
> 2022-01-21T03:28:18.3730293Z Jan 21 03:28:18 
> java.nio.channels.ClosedByInterruptException: null
> 2022-01-21T03:28:18.3730834Z Jan 21 03:28:18 
> java.nio.channels.ClosedByInterruptException: null
> 2022-01-21T03:28:18.3731499Z Jan 21 03:28:18  at 
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>  ~[?:1.8.0_292]
> 2022-01-21T03:28:18.3732203Z Jan 21 03:28:18  at 
> sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:658) 
> ~[?:1.8.0_292]
> 2022-01-21T03:28:18.3733478Z Jan 21 03:28:18  at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>  ~[hadoop-common-2.8.5.jar:?]
> 2022-01-21T03:28:18.3734470Z Jan 21 03:28:18  at 
> org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) 
> ~[hadoop-common-2.8.5.jar:?]
> 2022-01-21T03:28:18.3735432Z Jan 21 03:28:18  at 
> org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:685) 
> [hadoop-common-2.8.5.jar:?]
> 2022-01-21T03:28:18.3736414Z Jan 21 03:28:18  at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:788) 
> [hadoop-common-2.8.5.jar:?]
> 2022-01-21T03:28:18.3737734Z Jan 21 03:28:18  at 
> org.apache.hadoop.ipc.Client$Connection.access$3500(Client.java:410) 
> [hadoop-common-2.8.5.jar:?]
> 2022-01-21T03:28:18.3738853Z Jan 21 03:28:18  at 
> org.apache.hadoop.ipc.Client.getConnection(Client.java:1550) 
> [hadoop-common-2.8.5.jar:?]
> 2022-01-21T03:28:18.3739752Z Jan 21 03:28:18  at 
> org.apache.hadoop.ipc.Client.call(Client.java:1381) 
> [hadoop-common-2.8.5.jar:?]
> 2022-01-21T03:28:18.3740638Z Jan 21 03:28:18  at 
> org.apache.hadoop.ipc.Client.call(Client.java:1345) 
> [hadoop-common-2.8.5.jar:?]
> 2022-01-21T03:28:18.3741589Z Jan 21 03:28:18  at 
> o

[jira] [Commented] (FLINK-25746) Failed to run ITCase locally with IDEA under flink-orc and flink-parquet module

2022-01-21 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479931#comment-17479931
 ] 

Jane Chan commented on FLINK-25746:
---

By simply adding guava as a test dependency to flink-orc and flink-parquet,
we can fix it. But I'm not sure what the root cause is.

My gut feeling is that this is related to FLINK-25128
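As a sketch of the workaround described above (scope is an assumption; the exact coordinates and managed version would need to match the Flink parent pom), the dependency could be declared in the flink-orc / flink-parquet pom.xml like this:

```xml
<!-- Assumed workaround: make guava available on the test classpath so that
     Calcite's CalciteSystemProperty can load com.google.common.base.MoreObjects
     when ITCases are started from the IDE. -->
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <scope>test</scope>
</dependency>
```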

> Failed to run ITCase locally with IDEA under flink-orc and flink-parquet 
> module
> ---
>
> Key: FLINK-25746
> URL: https://issues.apache.org/jira/browse/FLINK-25746
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.15.0
>Reporter: Jane Chan
>Priority: Major
> Attachments: image-2022-01-21-16-54-12-354.png, 
> image-2022-01-21-16-56-42-156.png
>
>
> Recently, it has been observed that several integration test cases failed 
> when running from IDEA locally, but running them from the maven command line 
> is OK.
> h4. How to reproduce
> {code:java}
> // switch to master branch
> git fetch origin
> git rebase origin/master
> mvn clean install -DskipTests -Dfast -Pskip-webui-build -Dscala-2.12 -T 1C  
> {code}
> Then run the following tests from IntelliJ IDEA
> h4. The affected tests
> {code:java}
> org.apache.flink.orc.OrcFileSystemITCase
> org.apache.flink.orc.OrcFsStreamingSinkITCase
> org.apache.flink.formats.parquet.ParquetFileCompactionITCase
> org.apache.flink.formats.parquet.ParquetFileSystemITCase
> org.apache.flink.formats.parquet.ParquetFsStreamingSinkITCase {code}
> h4. The stack trace
> !image-2022-01-21-16-54-12-354.png!
> {code:java}
> java.lang.NoClassDefFoundError: com/google/common/base/MoreObjects    at 
> org.apache.calcite.config.CalciteSystemProperty.loadProperties(CalciteSystemProperty.java:404)
>     at 
> org.apache.calcite.config.CalciteSystemProperty.<clinit>(CalciteSystemProperty.java:47)
>     at org.apache.calcite.util.Util.<clinit>(Util.java:152)
>     at org.apache.calcite.sql.type.SqlTypeName.<clinit>(SqlTypeName.java:142)
>     at 
> org.apache.calcite.sql.type.SqlTypeFamily.getTypeNames(SqlTypeFamily.java:163)
>     at org.apache.calcite.sql.type.ReturnTypes.<clinit>(ReturnTypes.java:127)
>     at org.apache.calcite.sql.SqlSetOperator.<init>(SqlSetOperator.java:45)
>     at 
> org.apache.calcite.sql.fun.SqlStdOperatorTable.<clinit>(SqlStdOperatorTable.java:97)
>     at 
> org.apache.calcite.sql2rel.StandardConvertletTable.<init>(StandardConvertletTable.java:101)
>     at 
> org.apache.calcite.sql2rel.StandardConvertletTable.<clinit>(StandardConvertletTable.java:91)
>     at 
> org.apache.calcite.tools.Frameworks$ConfigBuilder.<init>(Frameworks.java:234)
>     at 
> org.apache.calcite.tools.Frameworks$ConfigBuilder.<init>(Frameworks.java:215)
>     at 
> org.apache.calcite.tools.Frameworks.newConfigBuilder(Frameworks.java:199)
>     at 
> org.apache.flink.table.planner.delegation.PlannerContext.createFrameworkConfig(PlannerContext.java:145)
>     at 
> org.apache.flink.table.planner.delegation.PlannerContext.<init>(PlannerContext.java:129)
>     at 
> org.apache.flink.table.planner.delegation.PlannerBase.<init>(PlannerBase.scala:118)
>     at 
> org.apache.flink.table.planner.delegation.StreamPlanner.<init>(StreamPlanner.scala:55)
>     at 
> org.apache.flink.table.planner.delegation.DefaultPlannerFactory.create(DefaultPlannerFactory.java:62)
>     at 
> org.apache.flink.table.factories.PlannerFactoryUtil.createPlanner(PlannerFactoryUtil.java:53)
>     at 
> org.apache.flink.table.api.bridge.scala.internal.StreamTableEnvironmentImpl$.create(StreamTableEnvironmentImpl.scala:323)
>     at 
> org.apache.flink.table.api.bridge.scala.StreamTableEnvironment$.create(StreamTableEnvironment.scala:925)
>     at 
> org.apache.flink.table.planner.runtime.utils.StreamingTestBase.before(StreamingTestBase.scala:54)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258)
>     at org.junit.rules.ExternalRes

[jira] [Commented] (FLINK-25748) Website misses some Repositories

2022-01-21 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479933#comment-17479933
 ] 

Chesnay Schepler commented on FLINK-25748:
--

I'm wondering if we should just link to 
https://gitbox.apache.org/repos/asf#flink to avoid the maintenance overhead.

> Website misses some Repositories
> 
>
> Key: FLINK-25748
> URL: https://issues.apache.org/jira/browse/FLINK-25748
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Reporter: Konstantin Knauf
>Assignee: Konstantin Knauf
>Priority: Minor
>
> https://flink.apache.org/community.html should list 
> * https://github.com/apache/flink-table-store
> * https://github.com/apache/flink-ml
> * https://github.com/apache/flink-benchmarks
> * https://github.com/apache/flink-statefun-playground
> * https://github.com/apache/flink-training
> * https://github.com/apache/flink-playgrounds
> * https://github.com/apache/flink-jira-bot
> * https://github.com/apache/flink-connectors
> As repositories of the project.





[jira] [Commented] (FLINK-25748) Website misses some Repositories

2022-01-21 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479934#comment-17479934
 ] 

Chesnay Schepler commented on FLINK-25748:
--

Especially since we're gonna be adding a whole bunch of additional repos for 
the connectors soon...

> Website misses some Repositories
> 
>
> Key: FLINK-25748
> URL: https://issues.apache.org/jira/browse/FLINK-25748
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Reporter: Konstantin Knauf
>Assignee: Konstantin Knauf
>Priority: Minor
>
> https://flink.apache.org/community.html should list 
> * https://github.com/apache/flink-table-store
> * https://github.com/apache/flink-ml
> * https://github.com/apache/flink-benchmarks
> * https://github.com/apache/flink-statefun-playground
> * https://github.com/apache/flink-training
> * https://github.com/apache/flink-playgrounds
> * https://github.com/apache/flink-jira-bot
> * https://github.com/apache/flink-connectors
> As repositories of the project.





[jira] [Created] (FLINK-25750) Performance regression on 20.01.2022 in globalWindow and stateBackend benchmarks

2022-01-21 Thread Roman Khachatryan (Jira)
Roman Khachatryan created FLINK-25750:
-

 Summary: Performance regression on 20.01.2022 in globalWindow and 
stateBackend benchmarks
 Key: FLINK-25750
 URL: https://issues.apache.org/jira/browse/FLINK-25750
 Project: Flink
  Issue Type: Bug
  Components: Benchmarks, Runtime / State Backends
Affects Versions: 1.15.0
Reporter: Roman Khachatryan
 Fix For: 1.15.0


http://codespeed.dak8s.net:8000/timeline/#/?exe=1,3&ben=globalWindow&env=2&revs=200&equid=off&quarts=on&extr=on
http://codespeed.dak8s.net:8000/timeline/#/?exe=1,3&ben=stateBackends.FS&env=2&revs=200&equid=off&quarts=on&extr=on
http://codespeed.dak8s.net:8000/timeline/#/?exe=1,3&ben=stateBackends.FS&env=2&revs=200&equid=off&quarts=on&extr=on





[GitHub] [flink] zentol commented on a change in pull request #18416: [FLINK-25715][clients] Add deployment option (`execution.submit-failed-job-on-application-error`) for submitting a failed job when

2022-01-21 Thread GitBox


zentol commented on a change in pull request #18416:
URL: https://github.com/apache/flink/pull/18416#discussion_r789479364



##
File path: 
flink-clients/src/main/java/org/apache/flink/client/deployment/application/ApplicationDispatcherBootstrap.java
##
@@ -266,10 +271,22 @@ private void runApplicationEntryPoint(
 final Set tolerateMissingResult,
 final DispatcherGateway dispatcherGateway,
 final ScheduledExecutor scheduledExecutor,
-final boolean enforceSingleJobExecution) {
+final boolean enforceSingleJobExecution,
+final boolean submitFailedJobOnApplicationError) {
+if (submitFailedJobOnApplicationError && !enforceSingleJobExecution) {
+dispatcherGateway.submitFailedJob(
+ZERO_JOB_ID,
+FAILED_JOB_NAME,
+new IllegalStateException(
+String.format(
+"Submission of failed job in case of an 
application error ('%s') is not supported in non-HA setups.",

Review comment:
   > If the exception happens between first and second submission (first 
one has already completed). What job id do we submit the job with?
   
   If the job had run the job ID would be random as well, right? Couldn't we 
use that then?
   
   > using ZERO_JOB_ID might not be correct
   
   We should try to reduce this usage as much as possible, because it is quite 
problematic (e.g., it breaks archiving).
   (Ideally we find a way to have a proper user-facing job ID)








[jira] [Commented] (FLINK-25748) Website misses some Repositories

2022-01-21 Thread Konstantin Knauf (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479936#comment-17479936
 ] 

Konstantin Knauf commented on FLINK-25748:
--

Good point. I'd say we list the main repository and link to gitbox for the full 
list. 

> Website misses some Repositories
> 
>
> Key: FLINK-25748
> URL: https://issues.apache.org/jira/browse/FLINK-25748
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Reporter: Konstantin Knauf
>Assignee: Konstantin Knauf
>Priority: Minor
>
> https://flink.apache.org/community.html should list 
> * https://github.com/apache/flink-table-store
> * https://github.com/apache/flink-ml
> * https://github.com/apache/flink-benchmarks
> * https://github.com/apache/flink-statefun-playground
> * https://github.com/apache/flink-training
> * https://github.com/apache/flink-playgrounds
> * https://github.com/apache/flink-jira-bot
> * https://github.com/apache/flink-connectors
> As repositories of the project.





[GitHub] [flink] flinkbot edited a comment on pull request #18238: WIP: [FLINK-XXXXX] Task local recovery for the reactive mode.

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18238:
URL: https://github.com/apache/flink/pull/18238#issuecomment-1002686071


   
   ## CI report:
   
   * ba54b091609f7e51792a911192e89e0eb7b6d1d7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28719)
 
   * 8117d232abbcd9f80441b258002c50b2474d6ac7 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29874)
 
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #18268: [FLINK-14902][connector] Supports jdbc async lookup join

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18268:
URL: https://github.com/apache/flink/pull/18268#issuecomment-1005479356


   
   ## CI report:
   
   * fe7d298f1ac142a2fa53241df09eaf93b44ec4be Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29290)
 
   * f64a2fe9f7c1ca9a041641256dc09d00253ce837 UNKNOWN
   * 3bdbb98c9d7831eb50e16102740b7193c5376646 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29873)
 
   * 259aeca4d91fc0adfaef93fdba9aa872a866a3c8 UNKNOWN
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #18414: [hotfix][docs]fix flink sql Cascading Window TVF Aggregation exception

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18414:
URL: https://github.com/apache/flink/pull/18414#issuecomment-1017182067


   
   ## CI report:
   
   * 2d7b1b7425be2b9cfb9020d92cf08cc6d5596ef6 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29864)
 
   
   






[GitHub] [flink] dmvk commented on a change in pull request #18416: [FLINK-25715][clients] Add deployment option (`execution.submit-failed-job-on-application-error`) for submitting a failed job when t

2022-01-21 Thread GitBox


dmvk commented on a change in pull request #18416:
URL: https://github.com/apache/flink/pull/18416#discussion_r789487585



##
File path: 
flink-clients/src/main/java/org/apache/flink/client/deployment/application/ApplicationDispatcherBootstrap.java
##
@@ -266,10 +271,22 @@ private void runApplicationEntryPoint(
 final Set tolerateMissingResult,
 final DispatcherGateway dispatcherGateway,
 final ScheduledExecutor scheduledExecutor,
-final boolean enforceSingleJobExecution) {
+final boolean enforceSingleJobExecution,
+final boolean submitFailedJobOnApplicationError) {
+if (submitFailedJobOnApplicationError && !enforceSingleJobExecution) {
+dispatcherGateway.submitFailedJob(
+ZERO_JOB_ID,
+FAILED_JOB_NAME,
+new IllegalStateException(
+String.format(
+"Submission of failed job in case of an 
application error ('%s') is not supported in non-HA setups.",

Review comment:
   > If the job had run the job ID would be random as well, right? Couldn't 
we use that then?
   
   For this to be useful, the user should know the jobId upfront (that's one of 
the reasons for supporting the single execution mode only). Also this is not 
really an exception from the "application driver", but just an unsupported 
combination of configurations.
   
   I think the current approach should be sufficient for now (failing the whole 
dispatcher bootstrap). Also it's an experimental feature, so we can reiterate 
on this later if we find this confusing.








[jira] [Commented] (FLINK-25470) Add/Expose/Differentiate metrics of checkpoint size between changelog size vs materialization size

2022-01-21 Thread Roman Khachatryan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479939#comment-17479939
 ] 

Roman Khachatryan commented on FLINK-25470:
---

> How much Data Size increases/exploding
I think this is precisely answered by total checkpoint size.

> changelog sizes from the last complete checkpoint (that can roughly infer 
> restore time)
+1

[~ym] 
could you please also add the motivation for these two items:
> When a checkpoint includes a new Materialization
> Materialization size

And regardless, the UI already seems a bit overloaded, we'll probably need to 
add a separate checkpoint details page.
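As a rough illustration of why the split matters (a hedged sketch, not Flink code; the class and method names are made up): with the changelog state backend, restore work is roughly the last materialized snapshot plus every changelog increment written since that materialization, so exposing the two sizes separately lets users estimate restore time:

```java
public class RestoreEstimate {

    // Made-up helper: restoring from a changelog-based checkpoint means loading
    // the last materialized snapshot and then replaying all changelog increments
    // written since that materialization.
    static long estimateRestoreBytes(long materializedBytes, long[] changelogIncrements) {
        long total = materializedBytes;
        for (long increment : changelogIncrements) {
            total += increment;
        }
        return total;
    }

    public static void main(String[] args) {
        // A 1024-unit snapshot plus two increments of 100 and 50 units.
        System.out.println(estimateRestoreBytes(1024, new long[] {100, 50})); // 1174
    }
}
```

A "Data Size" that only reports the total cannot distinguish a large snapshot from a long changelog tail, even though the latter dominates replay time.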

> Add/Expose/Differentiate metrics of checkpoint size between changelog size vs 
> materialization size
> --
>
> Key: FLINK-25470
> URL: https://issues.apache.org/jira/browse/FLINK-25470
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Yuan Mei
>Priority: Major
> Attachments: Screen Shot 2021-12-29 at 1.09.48 PM.png
>
>
> FLINK-25557  only resolves part of the problems. 
> Eventually, we should answer questions:
>  * How much Data Size increases/exploding
>  * When a checkpoint includes a new Materialization
>  * Materialization size
>  * changelog sizes from the last complete checkpoint (that can roughly infer 
> restore time)
>  
>  





[GitHub] [flink] flinkbot edited a comment on pull request #18268: [FLINK-14902][connector] Supports jdbc async lookup join

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18268:
URL: https://github.com/apache/flink/pull/18268#issuecomment-1005479356


   
   ## CI report:
   
   * fe7d298f1ac142a2fa53241df09eaf93b44ec4be Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29290)
 
   * f64a2fe9f7c1ca9a041641256dc09d00253ce837 UNKNOWN
   * 3bdbb98c9d7831eb50e16102740b7193c5376646 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29873)
 
   
   






[jira] [Assigned] (FLINK-25750) Performance regression on 20.01.2021 in globalWindow and stateBackend benchmarks

2022-01-21 Thread Roman Khachatryan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Khachatryan reassigned FLINK-25750:
-

Assignee: Roman Khachatryan

> Performance regression on 20.01.2021 in globalWindow and stateBackend 
> benchmarks
> 
>
> Key: FLINK-25750
> URL: https://issues.apache.org/jira/browse/FLINK-25750
> Project: Flink
>  Issue Type: Bug
>  Components: Benchmarks, Runtime / State Backends
>Affects Versions: 1.15.0
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Critical
> Fix For: 1.15.0
>
>
> http://codespeed.dak8s.net:8000/timeline/#/?exe=1,3&ben=globalWindow&env=2&revs=200&equid=off&quarts=on&extr=on
> http://codespeed.dak8s.net:8000/timeline/#/?exe=1,3&ben=stateBackends.FS&env=2&revs=200&equid=off&quarts=on&extr=on
> http://codespeed.dak8s.net:8000/timeline/#/?exe=1,3&ben=stateBackends.FS&env=2&revs=200&equid=off&quarts=on&extr=on



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18427: [FLINK-25386][table] Harden table persisted plan

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18427:
URL: https://github.com/apache/flink/pull/18427#issuecomment-1017644617


   
   ## CI report:
   
   * a69e8836becd5bbdedd183376c67dae35afc2960 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29812)
 
   * b83fd153ef3df414e6a7766e26ccd37f788d728e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] fapaul merged pull request #18411: [FLINK-23944][test][pulsar] 1. change the Matcher to validate both size and data 2. pulsar IT test generate deterministic data

2022-01-21 Thread GitBox


fapaul merged pull request #18411:
URL: https://github.com/apache/flink/pull/18411


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] fapaul merged pull request #18288: [FLINK-20188][Connectors][Docs][FileSystem] Added documentation for File Source

2022-01-21 Thread GitBox


fapaul merged pull request #18288:
URL: https://github.com/apache/flink/pull/18288


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20188) Add Documentation for new File Source

2022-01-21 Thread Fabian Paul (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479944#comment-17479944
 ] 

Fabian Paul commented on FLINK-20188:
-

Merged in master: 3dbb4974a4d9bd5d77be7b57dde1f330c01a650d

> Add Documentation for new File Source
> -
>
> Key: FLINK-20188
> URL: https://issues.apache.org/jira/browse/FLINK-20188
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / FileSystem, Documentation
>Affects Versions: 1.14.0, 1.13.3, 1.15.0
>Reporter: Stephan Ewen
>Assignee: Martijn Visser
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.15.0
>
> Attachments: image-2021-11-16-11-42-32-957.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] fapaul merged pull request #18373: [FLINK-20188][BP 1.14][Connectors][Docs][FileSystem] Added documentation for FileSource

2022-01-21 Thread GitBox


fapaul merged pull request #18373:
URL: https://github.com/apache/flink/pull/18373


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (FLINK-20188) Add Documentation for new File Source

2022-01-21 Thread Fabian Paul (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Paul resolved FLINK-20188.
-
Fix Version/s: 1.14.4
   Resolution: Fixed

> Add Documentation for new File Source
> -
>
> Key: FLINK-20188
> URL: https://issues.apache.org/jira/browse/FLINK-20188
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / FileSystem, Documentation
>Affects Versions: 1.14.0, 1.13.3, 1.15.0
>Reporter: Stephan Ewen
>Assignee: Martijn Visser
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.15.0, 1.14.4
>
> Attachments: image-2021-11-16-11-42-32-957.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-20188) Add Documentation for new File Source

2022-01-21 Thread Fabian Paul (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479945#comment-17479945
 ] 

Fabian Paul commented on FLINK-20188:
-

Merged in release-1.14: 303e3064f27d8d648f6b745a62ec12707f3c5cf6

> Add Documentation for new File Source
> -
>
> Key: FLINK-20188
> URL: https://issues.apache.org/jira/browse/FLINK-20188
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / FileSystem, Documentation
>Affects Versions: 1.14.0, 1.13.3, 1.15.0
>Reporter: Stephan Ewen
>Assignee: Martijn Visser
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.15.0
>
> Attachments: image-2021-11-16-11-42-32-957.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18268: [FLINK-14902][connector] Supports jdbc async lookup join

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18268:
URL: https://github.com/apache/flink/pull/18268#issuecomment-1005479356


   
   ## CI report:
   
   * fe7d298f1ac142a2fa53241df09eaf93b44ec4be Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29290)
 
   * f64a2fe9f7c1ca9a041641256dc09d00253ce837 UNKNOWN
   * 3bdbb98c9d7831eb50e16102740b7193c5376646 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29873)
 
   * 259aeca4d91fc0adfaef93fdba9aa872a866a3c8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18348: [FLINK-24905][connectors/kinesis] Adding support for Kinesis DataStream sink as kinesis table connector sink.

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18348:
URL: https://github.com/apache/flink/pull/18348#issuecomment-1011977937


   
   ## CI report:
   
   * 6a5494ae1f78bae86a9a5612d4a0e6ab973005db Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29810)
 
   * e924187c8b00f993aed7d56774f2ee168fe750ea UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18427: [FLINK-25386][table] Harden table persisted plan

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18427:
URL: https://github.com/apache/flink/pull/18427#issuecomment-1017644617


   
   ## CI report:
   
   * a69e8836becd5bbdedd183376c67dae35afc2960 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29812)
 
   * b83fd153ef3df414e6a7766e26ccd37f788d728e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29876)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] LadyForest opened a new pull request #18438: [hotfix][filesystem] Fix the typo in InProgressFileWriter

2022-01-21 Thread GitBox


LadyForest opened a new pull request #18438:
URL: https://github.com/apache/flink/pull/18438


   ## What is the purpose of the change
   
   This PR is trivial, nothing but fixing a typo in `InProgressFileWriter`
   
   
   ## Brief changelog
   
   `InProgressFileWriter`
 - a element => an element
   
   
   ## Verifying this change
   
   This change is a trivial rework/code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zentol commented on a change in pull request #18416: [FLINK-25715][clients] Add deployment option (`execution.submit-failed-job-on-application-error`) for submitting a failed job when

2022-01-21 Thread GitBox


zentol commented on a change in pull request #18416:
URL: https://github.com/apache/flink/pull/18416#discussion_r789500760



##
File path: 
flink-clients/src/main/java/org/apache/flink/client/deployment/application/ApplicationDispatcherBootstrap.java
##
@@ -266,10 +271,22 @@ private void runApplicationEntryPoint(
 final Set tolerateMissingResult,
 final DispatcherGateway dispatcherGateway,
 final ScheduledExecutor scheduledExecutor,
-final boolean enforceSingleJobExecution) {
+final boolean enforceSingleJobExecution,
+final boolean submitFailedJobOnApplicationError) {
+if (submitFailedJobOnApplicationError && !enforceSingleJobExecution) {
+dispatcherGateway.submitFailedJob(
+ZERO_JOB_ID,
+FAILED_JOB_NAME,
+new IllegalStateException(
+String.format(
+"Submission of failed job in case of an 
application error ('%s') is not supported in non-HA setups.",

Review comment:
   > For this to be useful, the user should know the jobId upfront
   
   I don't think that's really true; outside of application mode users don't 
know the job ID upfront.
   The job name & stacktrace are the identifiable bits imo.
   
   * the stacktrace provides information on where it failed
   * the job name could be something like "Job #4" or "Job after ", "UserClass#Line", 
anything that is reasonably deterministic.
   
   > I think the current approach should be sufficient for now (failing the 
whole dispatcher bootstrap)
   
   It's not documented though ;) Neither for users (config docs) and devs 
(comment).
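
   For readers following this thread, the deployment option under discussion 
could be enabled roughly like this in `flink-conf.yaml`. This is a sketch only: 
the option key is taken from the PR title of FLINK-25715, and the exact name 
and default may differ in the merged version.

```yaml
# Hypothetical flink-conf.yaml fragment (key taken from the FLINK-25715 PR
# title, not verified against the merged option set): when the application
# entry point fails before submitting a job, submit a placeholder FAILED job
# so the failure is visible in the web UI / job archive.
execution.submit-failed-job-on-application-error: true
```

   As the guard in the diff above shows, enabling this without single-job 
execution enforcement yields a placeholder failure stating that it is 
unsupported in non-HA setups, so the option is only meaningful together with 
single-job (application-mode, HA) execution.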
   
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18438: [hotfix][filesystem] Fix the typo in InProgressFileWriter

2022-01-21 Thread GitBox


flinkbot commented on pull request #18438:
URL: https://github.com/apache/flink/pull/18438#issuecomment-1018341455


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 4265d8adf7ac94568ba86adf9303300696ec6424 (Fri Jan 21 
09:40:25 UTC 2022)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-23944) PulsarSourceITCase.testTaskManagerFailure is instable

2022-01-21 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479947#comment-17479947
 ] 

Martijn Visser commented on FLINK-23944:


Merged b4fc1c8ab69a6bfaec27d5044d1aef7c1453b2ac into master to get better 
insights into Pulsar instabilities. 

> PulsarSourceITCase.testTaskManagerFailure is instable
> -
>
> Key: FLINK-23944
> URL: https://issues.apache.org/jira/browse/FLINK-23944
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Affects Versions: 1.14.0
>Reporter: Dian Fu
>Assignee: Yufei Zhang
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0
>
>
> [https://dev.azure.com/dianfu/Flink/_build/results?buildId=430&view=logs&j=f3dc9b18-b77a-55c1-591e-264c46fe44d1&t=2d3cd81e-1c37-5c31-0ee4-f5d5cdb9324d]
> It's from my personal azure pipeline, however, I'm pretty sure that I have 
> not touched any code related to this. 
> {code:java}
> Aug 24 10:44:13 [ERROR] testTaskManagerFailure{TestEnvironment, 
> ExternalContext, ClusterControllable}[1] Time elapsed: 258.397 s <<< FAILURE! 
> Aug 24 10:44:13 java.lang.AssertionError: Aug 24 10:44:13 Aug 24 10:44:13 
> Expected: Records consumed by Flink should be identical to test data and 
> preserve the order in split Aug 24 10:44:13 but: Mismatched record at 
> position 7: Expected '0W6SzacX7MNL4xLL3BZ8C3ljho4iCydbvxIl' but was 
> 'wVi5JaJpNvgkDEOBRC775qHgw0LyRW2HBxwLmfONeEmr' Aug 24 10:44:13 at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) Aug 24 10:44:13 
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8) Aug 24 
> 10:44:13 at 
> org.apache.flink.connectors.test.common.testsuites.SourceTestSuiteBase.testTaskManagerFailure(SourceTestSuiteBase.java:271)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-25740) PulsarSourceOrderedE2ECase fails on azure

2022-01-21 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser reassigned FLINK-25740:
--

Assignee: Yufei Zhang

> PulsarSourceOrderedE2ECase fails on azure
> -
>
> Key: FLINK-25740
> URL: https://issues.apache.org/jira/browse/FLINK-25740
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Affects Versions: 1.15.0
>Reporter: Roman Khachatryan
>Assignee: Yufei Zhang
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29789&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=160c9ae5-96fd-516e-1c91-deb81f59292a&l=16385
> {code}
> [ERROR] Errors:
> [ERROR]   
> PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testIdleReader:187->SourceTestSuiteBase.gene
>  rateAndWriteTestData:315 » BrokerPersistence
> [ERROR]   
> PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testIdleReader:187->SourceTestSuiteBase.gene
>  rateAndWriteTestData:315 » BrokerPersistence
> [ERROR]   
> PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testMultipleSplits:145->SourceTestSuiteBase.
>  generateAndWriteTestData:315 » BrokerPersistence
> [ERROR]   
> PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testMultipleSplits:145->SourceTestSuiteBase.
>  generateAndWriteTestData:315 » BrokerPersistence
> [ERROR]   
> PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testSourceSingleSplit:105->SourceTestSuiteBa
>  se.generateAndWriteTestData:315 » BrokerPersistence
> [ERROR]   
> PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testSourceSingleSplit:105->SourceTestSuiteBa
>  se.generateAndWriteTestData:315 » BrokerPersistence
> [ERROR]   
> PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testTaskManagerFailure:232 » 
> BrokerPersisten ce
> [ERROR]   
> PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testTaskManagerFailure:232 » 
> BrokerPersisten ce
> [ERROR]   
> PulsarSourceUnorderedE2ECase>UnorderedSourceTestSuiteBase.testOneSplitWithMultipleConsumers
>  :60 » BrokerPersistence
> [ERROR]   
> PulsarSourceUnorderedE2ECase>UnorderedSourceTestSuiteBase.testOneSplitWithMultipleConsumers
>  :60 » BrokerPersistence
> {code}
> {code}
> 2022-01-20T15:28:37.1467261Z Jan 20 15:28:37 [ERROR] 
> org.apache.flink.tests.util.pulsar.PulsarSourceUnorderedE2ECase.testOneSplitWithMultipleConsumers(TestEnvironment,
>  ExternalContext)[2]  Time elapsed: 77.698 s  <<< ERROR!
> 2022-01-20T15:28:37.1469146Z Jan 20 15:28:37 
> org.apache.pulsar.client.api.PulsarClientException$BrokerPersistenceException:
>  org.apache.bookkeeper.mledger.ManagedLedgerException: Not enough non-faulty 
> bookies available
> 2022-01-20T15:28:37.1470062Z Jan 20 15:28:37at 
> org.apache.pulsar.client.api.PulsarClientException.unwrap(PulsarClientException.java:985)
> 2022-01-20T15:28:37.1470802Z Jan 20 15:28:37at 
> org.apache.pulsar.client.impl.ProducerBuilderImpl.create(ProducerBuilderImpl.java:95)
> 2022-01-20T15:28:37.1471598Z Jan 20 15:28:37at 
> org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.sendMessages(PulsarRuntimeOperator.java:172)
> 2022-01-20T15:28:37.1472451Z Jan 20 15:28:37at 
> org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.sendMessages(PulsarRuntimeOperator.java:167)
> 2022-01-20T15:28:37.1473307Z Jan 20 15:28:37at 
> org.apache.flink.connector.pulsar.testutils.PulsarPartitionDataWriter.writeRecords(PulsarPartitionDataWriter.java:41)
> 2022-01-20T15:28:37.1474209Z Jan 20 15:28:37at 
> org.apache.flink.tests.util.pulsar.common.UnorderedSourceTestSuiteBase.testOneSplitWithMultipleConsumers(UnorderedSourceTestSuiteBase.java:60)
> 2022-01-20T15:28:37.1474949Z Jan 20 15:28:37at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-01-20T15:28:37.1475658Z Jan 20 15:28:37at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-01-20T15:28:37.1476383Z Jan 20 15:28:37at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-01-20T15:28:37.1477030Z Jan 20 15:28:37at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-01-20T15:28:37.1477670Z Jan 20 15:28:37at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
> 2022-01-20T15:28:37.1478388Z Jan 20 15:28:37at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18348: [FLINK-24905][connectors/kinesis] Adding support for Kinesis DataStream sink as kinesis table connector sink.

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18348:
URL: https://github.com/apache/flink/pull/18348#issuecomment-1011977937


   
   ## CI report:
   
   * 6a5494ae1f78bae86a9a5612d4a0e6ab973005db Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29810)
 
   * e924187c8b00f993aed7d56774f2ee168fe750ea Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29877)
 
   * 6ed3a6b636531103bc1429a35084ca3d0a1f8616 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25740) PulsarSourceOrderedE2ECase fails on azure

2022-01-21 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479948#comment-17479948
 ] 

Martijn Visser commented on FLINK-25740:


Merged b4fc1c8ab69a6bfaec27d5044d1aef7c1453b2ac into master to get better 
insights into Pulsar instabilities.

Also see FLINK-23944

> PulsarSourceOrderedE2ECase fails on azure
> -
>
> Key: FLINK-25740
> URL: https://issues.apache.org/jira/browse/FLINK-25740
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Affects Versions: 1.15.0
>Reporter: Roman Khachatryan
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29789&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=160c9ae5-96fd-516e-1c91-deb81f59292a&l=16385
> {code}
> [ERROR] Errors:
> [ERROR]   
> PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testIdleReader:187->SourceTestSuiteBase.gene
>  rateAndWriteTestData:315 » BrokerPersistence
> [ERROR]   
> PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testIdleReader:187->SourceTestSuiteBase.gene
>  rateAndWriteTestData:315 » BrokerPersistence
> [ERROR]   
> PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testMultipleSplits:145->SourceTestSuiteBase.
>  generateAndWriteTestData:315 » BrokerPersistence
> [ERROR]   
> PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testMultipleSplits:145->SourceTestSuiteBase.
>  generateAndWriteTestData:315 » BrokerPersistence
> [ERROR]   
> PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testSourceSingleSplit:105->SourceTestSuiteBa
>  se.generateAndWriteTestData:315 » BrokerPersistence
> [ERROR]   
> PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testSourceSingleSplit:105->SourceTestSuiteBa
>  se.generateAndWriteTestData:315 » BrokerPersistence
> [ERROR]   
> PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testTaskManagerFailure:232 » 
> BrokerPersisten ce
> [ERROR]   
> PulsarSourceOrderedE2ECase>SourceTestSuiteBase.testTaskManagerFailure:232 » 
> BrokerPersisten ce
> [ERROR]   
> PulsarSourceUnorderedE2ECase>UnorderedSourceTestSuiteBase.testOneSplitWithMultipleConsumers
>  :60 » BrokerPersistence
> [ERROR]   
> PulsarSourceUnorderedE2ECase>UnorderedSourceTestSuiteBase.testOneSplitWithMultipleConsumers
>  :60 » BrokerPersistence
> {code}
> {code}
> 2022-01-20T15:28:37.1467261Z Jan 20 15:28:37 [ERROR] 
> org.apache.flink.tests.util.pulsar.PulsarSourceUnorderedE2ECase.testOneSplitWithMultipleConsumers(TestEnvironment,
>  ExternalContext)[2]  Time elapsed: 77.698 s  <<< ERROR!
> 2022-01-20T15:28:37.1469146Z Jan 20 15:28:37 
> org.apache.pulsar.client.api.PulsarClientException$BrokerPersistenceException:
>  org.apache.bookkeeper.mledger.ManagedLedgerException: Not enough non-faulty 
> bookies available
> 2022-01-20T15:28:37.1470062Z Jan 20 15:28:37at 
> org.apache.pulsar.client.api.PulsarClientException.unwrap(PulsarClientException.java:985)
> 2022-01-20T15:28:37.1470802Z Jan 20 15:28:37at 
> org.apache.pulsar.client.impl.ProducerBuilderImpl.create(ProducerBuilderImpl.java:95)
> 2022-01-20T15:28:37.1471598Z Jan 20 15:28:37at 
> org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.sendMessages(PulsarRuntimeOperator.java:172)
> 2022-01-20T15:28:37.1472451Z Jan 20 15:28:37at 
> org.apache.flink.connector.pulsar.testutils.runtime.PulsarRuntimeOperator.sendMessages(PulsarRuntimeOperator.java:167)
> 2022-01-20T15:28:37.1473307Z Jan 20 15:28:37at 
> org.apache.flink.connector.pulsar.testutils.PulsarPartitionDataWriter.writeRecords(PulsarPartitionDataWriter.java:41)
> 2022-01-20T15:28:37.1474209Z Jan 20 15:28:37at 
> org.apache.flink.tests.util.pulsar.common.UnorderedSourceTestSuiteBase.testOneSplitWithMultipleConsumers(UnorderedSourceTestSuiteBase.java:60)
> 2022-01-20T15:28:37.1474949Z Jan 20 15:28:37at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-01-20T15:28:37.1475658Z Jan 20 15:28:37at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-01-20T15:28:37.1476383Z Jan 20 15:28:37at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-01-20T15:28:37.1477030Z Jan 20 15:28:37at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-01-20T15:28:37.1477670Z Jan 20 15:28:37at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
> 2022-01-20T15:28:37.1478388Z Jan 20 15:28:37at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18394: [FLINK-25520][Table SQL/API] Implement "ALTER TABLE ... COMPACT" SQL

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18394:
URL: https://github.com/apache/flink/pull/18394#issuecomment-1015323011


   
   ## CI report:
   
   * 7da11c60c656bfab79cf3ae76bc56cc729ae24a6 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29857)
 
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #18405: [FLINK-25683][streaming-java] wrong result if table transfrom to Data…

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18405:
URL: https://github.com/apache/flink/pull/18405#issuecomment-1017071379


   
   ## CI report:
   
   * cb55e79c59c1d03f8aa3c883a97d55a004a28860 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29858)
 
   
   






[GitHub] [flink] flinkbot commented on pull request #18438: [hotfix][filesystem] Fix the typo in InProgressFileWriter

2022-01-21 Thread GitBox


flinkbot commented on pull request #18438:
URL: https://github.com/apache/flink/pull/18438#issuecomment-1018343137


   
   ## CI report:
   
   * 4265d8adf7ac94568ba86adf9303300696ec6424 UNKNOWN
   
   






[GitHub] [flink] afedulov commented on a change in pull request #17598: [FLINK-24703][connectors][formats] Add CSV format support for filesystem based on StreamFormat and BulkWriter interfaces.

2022-01-21 Thread GitBox


afedulov commented on a change in pull request #17598:
URL: https://github.com/apache/flink/pull/17598#discussion_r789503305



##
File path: flink-formats/flink-csv/pom.xml
##
@@ -77,6 +77,14 @@ under the License.
 

 
+   

Review comment:
   Good point, it is a leftover from an earlier testing approach, not 
needed anymore.








[GitHub] [flink] vahmed-hamdy closed pull request #17907: [FLINK-24905][connectors/kinesis] Adding support for Kinesis DataStream sink as kinesis table connector sink.

2022-01-21 Thread GitBox


vahmed-hamdy closed pull request #17907:
URL: https://github.com/apache/flink/pull/17907


   






[GitHub] [flink] flinkbot edited a comment on pull request #18169: [FLINK-25277] add shutdown hook to stop TaskExecutor on SIGTERM

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18169:
URL: https://github.com/apache/flink/pull/18169#issuecomment-998925832


   
   ## CI report:
   
   * 35011a0fb8ca36b38d854e88a2937357b8736f4d UNKNOWN
   * fc7193f9336b272ced363c100517ad4f4f793804 UNKNOWN
   * 08fa0c4904a5a3712b896210a4d2e224e3f1e455 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29819)
 
   * 418f28ea50f5f1e8f72eab6970a3ae26d3d41ad9 UNKNOWN
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #18268: [FLINK-14902][connector] Supports jdbc async lookup join

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18268:
URL: https://github.com/apache/flink/pull/18268#issuecomment-1005479356


   
   ## CI report:
   
   * fe7d298f1ac142a2fa53241df09eaf93b44ec4be Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29290)
 
   * f64a2fe9f7c1ca9a041641256dc09d00253ce837 UNKNOWN
   * 3bdbb98c9d7831eb50e16102740b7193c5376646 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29873)
 
   * 259aeca4d91fc0adfaef93fdba9aa872a866a3c8 UNKNOWN
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #18348: [FLINK-24905][connectors/kinesis] Adding support for Kinesis DataStream sink as kinesis table connector sink.

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18348:
URL: https://github.com/apache/flink/pull/18348#issuecomment-1011977937


   
   ## CI report:
   
   * 6a5494ae1f78bae86a9a5612d4a0e6ab973005db Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29810)
 
   * e924187c8b00f993aed7d56774f2ee168fe750ea Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29877)
 
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #18437: [FLINK-25712][connector/tests] Merge flink-connector-testing into flink-connector-test-utils

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18437:
URL: https://github.com/apache/flink/pull/18437#issuecomment-1018174136


   
   ## CI report:
   
   * be3d03d337bb7358ee949445f0530a73d02c43dc Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29854)
 
   * 049a0ebc6535291170b03e739521344d54809682 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29863)
 
   * 66184529bb0e4f34be6b7e3755d06cd50939f894 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29870)
 
   * 5115a30fdef014a1457212bdec8835166738bced UNKNOWN
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #18438: [hotfix][filesystem] Fix the typo in InProgressFileWriter

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18438:
URL: https://github.com/apache/flink/pull/18438#issuecomment-1018343137


   
   ## CI report:
   
   * 4265d8adf7ac94568ba86adf9303300696ec6424 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29880)
 
   
   






[jira] [Updated] (FLINK-25098) Jobmanager CrashLoopBackOff in HA configuration

2022-01-21 Thread Enrique Lacal (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enrique Lacal updated FLINK-25098:
--
Attachment: JM-FlinkException-checkpointHA.txt

> Jobmanager CrashLoopBackOff in HA configuration
> ---
>
> Key: FLINK-25098
> URL: https://issues.apache.org/jira/browse/FLINK-25098
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.13.2, 1.13.3
> Environment: Reproduced with:
> * Persistent jobs storage provided by the rocks-cephfs storage class.
> * OpenShift 4.9.5.
>Reporter: Adrian Vasiliu
>Priority: Critical
> Attachments: JM-FlinkException-checkpointHA.txt, 
> iaf-insights-engine--7fc4-eve-29ee-ep-jobmanager-1-jobmanager.log, 
> jm-flink-ha-jobmanager-log.txt, jm-flink-ha-tls-proxy-log.txt
>
>
> In a Kubernetes deployment of Flink 1.13.2 (also reproduced with Flink 
> 1.13.3), turning to Flink HA by using 3 replicas of the jobmanager leads to 
> CrashLoopBackoff for all replicas.
> Attaching the full logs of the {{jobmanager}} and {{tls-proxy}} containers of 
> jobmanager pod:
> [^jm-flink-ha-jobmanager-log.txt]
> [^jm-flink-ha-tls-proxy-log.txt]
> Reproduced with:
>  * Persistent jobs storage provided by the {{rocks-cephfs}} storage class 
> (shared by all replicas - ReadWriteMany) and mount path set via 
> {{high-availability.storageDir: file:///}}.
>  * OpenShift 4.9.5 and also 4.8.x - reproduced in several clusters, it's not 
> a "one-shot" trouble.
> Remarks:
>  * This is a follow-up of 
> https://issues.apache.org/jira/browse/FLINK-22014?focusedCommentId=17450524&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17450524.
>  
>  * Picked Critical severity as HA is critical for our product.





[GitHub] [flink] LadyForest commented on a change in pull request #18394: [FLINK-25520][Table SQL/API] Implement "ALTER TABLE ... COMPACT" SQL

2022-01-21 Thread GitBox


LadyForest commented on a change in pull request #18394:
URL: https://github.com/apache/flink/pull/18394#discussion_r789511549



##
File path: 
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/utils/TableTestBase.scala
##
@@ -47,21 +53,30 @@ import org.apache.flink.table.expressions.Expression
 import org.apache.flink.table.factories.{FactoryUtil, PlannerFactoryUtil, 
StreamTableSourceFactory}
 import org.apache.flink.table.functions._
 import org.apache.flink.table.module.ModuleManager
-import org.apache.flink.table.operations.{ModifyOperation, Operation, 
QueryOperation, SinkModifyOperation}
+import org.apache.flink.table.operations.ModifyOperation
+import org.apache.flink.table.operations.Operation
+import org.apache.flink.table.operations.QueryOperation
+import org.apache.flink.table.operations.SinkModifyOperation

Review comment:
   I found the cause. Quite surprising:
   
![image](https://user-images.githubusercontent.com/55568005/150505930-1c6829ac-3b56-410d-8dfc-b9b0d25f84be.png)
   








[jira] [Commented] (FLINK-25098) Jobmanager CrashLoopBackOff in HA configuration

2022-01-21 Thread Enrique Lacal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479958#comment-17479958
 ] 

Enrique Lacal commented on FLINK-25098:
---

Hi [~trohrmann],

I didn't manage to reproduce the above error. I even tried with S3 back in December and left it running for a long period of time, and it worked.

Yes, we use StatefulSets for deploying the Flink JMs.

We have found a similar issue to the one above that reproduces consistently; here are the logs: [^JM-FlinkException-checkpointHA.txt]. Our manual workaround is to delete the affected HA ConfigMap, which points to this checkpoint, but that is not feasible in a production environment. I would really appreciate any thoughts on this and on what sort of solution we could come to. Let me know if you need any more information; I'm trying to get the logs from before this occurred.

Thanks,
Enrique
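
The manual workaround described above can be sketched as a couple of kubectl commands. This is only an illustration under assumptions: the cluster-id (`my-flink-cluster`), the namespace (`flink`), and the exact ConfigMap name are placeholders, and the label selector reflects the labels Flink's native Kubernetes HA services attach to their ConfigMaps, which may differ between Flink versions.

```shell
#!/bin/sh
# Hypothetical values; adjust to your deployment.
CLUSTER_ID=my-flink-cluster
NAMESPACE=flink

# 1. Inspect the HA ConfigMaps Flink created for this cluster.
kubectl -n "$NAMESPACE" get configmap \
  -l app="$CLUSTER_ID",configmap-type=high-availability

# 2. Delete the ConfigMap that still points at the unreadable checkpoint
#    (in Flink 1.13 the checkpoint pointer lives in the per-job
#    "<cluster-id>-<job-id>-jobmanager-leader" ConfigMap, a placeholder here);
#    the JobManager then re-elects a leader and comes up without it.
kubectl -n "$NAMESPACE" delete configmap "$CLUSTER_ID-<job-id>-jobmanager-leader"
```

Note that deleting HA metadata discards the job's latest checkpoint pointer, so the job restarts without that state; this is a last-resort recovery step, not a routine operation.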

> Jobmanager CrashLoopBackOff in HA configuration
> ---
>
> Key: FLINK-25098
> URL: https://issues.apache.org/jira/browse/FLINK-25098
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.13.2, 1.13.3
> Environment: Reproduced with:
> * Persistent jobs storage provided by the rocks-cephfs storage class.
> * OpenShift 4.9.5.
>Reporter: Adrian Vasiliu
>Priority: Critical
> Attachments: JM-FlinkException-checkpointHA.txt, 
> iaf-insights-engine--7fc4-eve-29ee-ep-jobmanager-1-jobmanager.log, 
> jm-flink-ha-jobmanager-log.txt, jm-flink-ha-tls-proxy-log.txt
>
>
> In a Kubernetes deployment of Flink 1.13.2 (also reproduced with Flink 
> 1.13.3), turning to Flink HA by using 3 replicas of the jobmanager leads to 
> CrashLoopBackoff for all replicas.
> Attaching the full logs of the {{jobmanager}} and {{tls-proxy}} containers of 
> jobmanager pod:
> [^jm-flink-ha-jobmanager-log.txt]
> [^jm-flink-ha-tls-proxy-log.txt]
> Reproduced with:
>  * Persistent jobs storage provided by the {{rocks-cephfs}} storage class 
> (shared by all replicas - ReadWriteMany) and mount path set via 
> {{high-availability.storageDir: file:///}}.
>  * OpenShift 4.9.5 and also 4.8.x - reproduced in several clusters, it's not 
> a "one-shot" trouble.
> Remarks:
>  * This is a follow-up of 
> https://issues.apache.org/jira/browse/FLINK-22014?focusedCommentId=17450524&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17450524.
>  
>  * Picked Critical severity as HA is critical for our product.





[GitHub] [flink] flinkbot edited a comment on pull request #18268: [FLINK-14902][connector] Supports jdbc async lookup join

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18268:
URL: https://github.com/apache/flink/pull/18268#issuecomment-1005479356


   
   ## CI report:
   
   * fe7d298f1ac142a2fa53241df09eaf93b44ec4be Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29290)
 
   * f64a2fe9f7c1ca9a041641256dc09d00253ce837 UNKNOWN
   * 3bdbb98c9d7831eb50e16102740b7193c5376646 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29873)
 
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #18169: [FLINK-25277] add shutdown hook to stop TaskExecutor on SIGTERM

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18169:
URL: https://github.com/apache/flink/pull/18169#issuecomment-998925832


   
   ## CI report:
   
   * 35011a0fb8ca36b38d854e88a2937357b8736f4d UNKNOWN
   * fc7193f9336b272ced363c100517ad4f4f793804 UNKNOWN
   * 08fa0c4904a5a3712b896210a4d2e224e3f1e455 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29819)
 
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #18348: [FLINK-24905][connectors/kinesis] Adding support for Kinesis DataStream sink as kinesis table connector sink.

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18348:
URL: https://github.com/apache/flink/pull/18348#issuecomment-1011977937


   
   ## CI report:
   
   * 6a5494ae1f78bae86a9a5612d4a0e6ab973005db Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29810)
 
   * e924187c8b00f993aed7d56774f2ee168fe750ea Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29877)
 
   * 6ed3a6b636531103bc1429a35084ca3d0a1f8616 UNKNOWN
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #18437: [FLINK-25712][connector/tests] Merge flink-connector-testing into flink-connector-test-utils

2022-01-21 Thread GitBox


flinkbot edited a comment on pull request #18437:
URL: https://github.com/apache/flink/pull/18437#issuecomment-1018174136


   
   ## CI report:
   
   * be3d03d337bb7358ee949445f0530a73d02c43dc Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29854)
 
   * 049a0ebc6535291170b03e739521344d54809682 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29863)
 
   * 66184529bb0e4f34be6b7e3755d06cd50939f894 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29870)
 
   * 5115a30fdef014a1457212bdec8835166738bced Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29881)
 
   
   





