[jira] [Closed] (FLINK-23148) flink-mesos does not build with scala 2.12

2021-06-24 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-23148.

Fix Version/s: (was: 1.14.0)
   Resolution: Won't Fix

> flink-mesos does not build with scala 2.12
> --
>
> Key: FLINK-23148
> URL: https://issues.apache.org/jira/browse/FLINK-23148
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Deployment / Mesos
>Affects Versions: 1.14.0
>Reporter: Xintong Song
>Priority: Major
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19500&view=logs&j=ed6509f5-1153-558c-557a-5ee0afbcdf24&t=241b1e5e-1a8e-5e6a-469a-a9b8cad87065&l=13494
> {code}
> +-org.apache.flink:flink-mesos_2.12:1.14-SNAPSHOT
>   +-org.apache.flink:flink-clients_2.12:1.14-SNAPSHOT
> +-org.apache.flink:flink-streaming-java_2.12:1.14-SNAPSHOT
>   +-org.apache.flink:flink-scala_2.12:1.14-SNAPSHOT
> +-org.scala-lang:scala-compiler:2.12.7
>   +-org.scala-lang.modules:scala-xml_2.12:1.0.6
> and
> +-org.apache.flink:flink-mesos_2.12:1.14-SNAPSHOT
>   +-org.scalatest:scalatest_2.12:3.0.0
> +-org.scala-lang.modules:scala-xml_2.12:1.0.5
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-23118) Drop Mesos support

2021-06-24 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-23118.

Resolution: Fixed

master: e8b7f3dc73ff8acfc3f4a3a7757798bcc8b80f63

> Drop Mesos support
> --
>
> Key: FLINK-23118
> URL: https://issues.apache.org/jira/browse/FLINK-23118
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Mesos
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> Following the discussion on the 
> [ML|https://lists.apache.org/thread.html/rd7bf0dabe2d75adb9f97a1879638711d04cfce0774d31b033acae0b8%40%3Cdev.flink.apache.org%3E]
>  , remove Mesos support.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zentol merged pull request #16249: [FLINK-23118] Drop Mesos support

2021-06-24 Thread GitBox


zentol merged pull request #16249:
URL: https://github.com/apache/flink/pull/16249


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #16287: [FLINK-23066] Introduce TableEnvironment#from(TableDescriptor)

2021-06-24 Thread GitBox


flinkbot commented on pull request #16287:
URL: https://github.com/apache/flink/pull/16287#issuecomment-868270704


   
   ## CI report:
   
   * 850b58ee58b7c1ed9906e2397da59d779a10e83a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #16286: [FLINK-21445] Refactors the PackagedProgramRetriever implementation and adds configuration to PackagedProgram

2021-06-24 Thread GitBox


flinkbot commented on pull request #16286:
URL: https://github.com/apache/flink/pull/16286#issuecomment-868270639


   
   ## CI report:
   
   * 4cbaee738f213985ae8d6850b36005e0ebafd5bf UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] SteNicholas commented on a change in pull request #16286: [FLINK-21445] Refactors the PackagedProgramRetriever implementation and adds configuration to PackagedProgram

2021-06-24 Thread GitBox


SteNicholas commented on a change in pull request #16286:
URL: https://github.com/apache/flink/pull/16286#discussion_r658517140



##
File path: 
flink-clients/src/main/java/org/apache/flink/client/program/PackagedProgramRetrieverImpl.java
##
@@ -0,0 +1,203 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.client.program;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.client.deployment.application.EntryClassInformationProvider;
+import org.apache.flink.client.deployment.application.FromClasspathEntryClassInformationProvider;
+import org.apache.flink.client.deployment.application.FromJarEntryClassInformationProvider;
+import org.apache.flink.util.FileUtils;
+import org.apache.flink.util.FlinkException;
+import org.apache.flink.util.Preconditions;
+import org.apache.flink.util.function.FunctionUtils;
+
+import javax.annotation.Nullable;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URL;
+import java.nio.file.Path;
+import java.util.Collections;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/** {@code PackageProgramRetrieverImpl} implements {@link PackagedProgramRetriever}. */
+public class PackagedProgramRetrieverImpl implements PackagedProgramRetriever {
+
+private final EntryClassInformationProvider entryClassInformationProvider;
+private final String[] programArguments;
+private final List<URL> userClasspath;
+
+/**
+ * Creates a {@code PackageProgramRetrieverImpl} with the given parameters.
+ *
+ * @param userLibDir The user library directory that is used for generating the user classpath
+ * if specified. The system classpath is used if not specified.
+ * @param jobClassName The job class that will be used if specified. The classpath is used to
+ * detect any main class if not specified.
+ * @param programArgs The program arguments.
+ * @return The {@code PackageProgramRetrieverImpl} that can be used to create a {@link
+ * PackagedProgram} instance.
+ * @throws FlinkException If something goes wrong during instantiation.
+ */
+public static PackagedProgramRetrieverImpl create(
+@Nullable File userLibDir, @Nullable String jobClassName, String[] programArgs)
+throws FlinkException {
+return create(userLibDir, null, jobClassName, programArgs);
+}
+
+/**
+ * Creates a {@code PackageProgramRetrieverImpl} with the given parameters.
+ *
+ * @param userLibDir The user library directory that is used for generating the user classpath
+ * if specified. The system classpath is used if not specified.
+ * @param jarFile The jar archive used expecting to have the job class included.
+ * @param jobClassName The job class that will be used if specified. The classpath is used to
+ * detect any main class if not specified.
+ * @param programArgs The program arguments.
+ * @return The {@code PackageProgramRetrieverImpl} that can be used to create a {@link
+ * PackagedProgram} instance.
+ * @throws FlinkException If something goes wrong during instantiation.
+ */
+public static PackagedProgramRetrieverImpl create(
+@Nullable File userLibDir,
+@Nullable File jarFile,
+@Nullable String jobClassName,
+String[] programArgs)

Review comment:
   Does this need to add a `Configuration` parameter?

##
File path: 
flink-yarn/src/main/java/org/apache/flink/yarn/entrypoint/YarnApplicationClusterEntryPoint.java
##
@@ -129,16 +129,12 @@ private static PackagedProgramRetriever getPackagedProgramRetriever(
 final Configuration configuration,
 final String[] programArguments,
 @Nullable final String jobClassName)
-throws IOException {
+throws IOException, FlinkException {
 
 final File userLibDir = YarnEntrypointUtils.getUsrLibDir(configuration).orElse(null);
 final File userApplicationJar = getUserApplicationJar(userLibDir, configuration);
-final ClassPathPackagedProgramRetriever.Builder retrieverBuilder =
-

[GitHub] [flink] flinkbot edited a comment on pull request #16285: [FLINK-23070][table-api-java] Introduce TableEnvironment#createTable

2021-06-24 Thread GitBox


flinkbot edited a comment on pull request #16285:
URL: https://github.com/apache/flink/pull/16285#issuecomment-868253867


   
   ## CI report:
   
   * c3f475e4c4fcd8bbf0a7695c75630e444308eee1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19518)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16283: [FLINK-23149][table-code-splitter] Introduce the new Java code splitter module

2021-06-24 Thread GitBox


flinkbot edited a comment on pull request #16283:
URL: https://github.com/apache/flink/pull/16283#issuecomment-868177111


   
   ## CI report:
   
   * e4990b25ca06e124e3d84da5b2dd77580340975c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19509)
 
   * d98990b3a75763ec378cefa927c6880365876116 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19515)
 
   * f60176579d7b4518ba7f10124456258a58fec05e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16281: [FLINK-23129][docs] Document ApplicationMode limitations

2021-06-24 Thread GitBox


flinkbot edited a comment on pull request #16281:
URL: https://github.com/apache/flink/pull/16281#issuecomment-867656783


   
   ## CI report:
   
   * 63b4d23354b8bc5cbe251d892458af9ff9174ad1 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19486)
 
   * 05a59e1932026f50ca427688551fff0ad09bfbce UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16229: [FLINK-22464][tests] Fix OperatorCoordinator test which is stalling with AdaptiveScheduler

2021-06-24 Thread GitBox


flinkbot edited a comment on pull request #16229:
URL: https://github.com/apache/flink/pull/16229#issuecomment-865339185


   
   ## CI report:
   
   * 7417e5ac3d755c34335ac9e13da705f4726d119b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19493)
 
   * 310a787c5010119a80176ec3b084c4665cb06e43 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] SteNicholas closed pull request #15020: [FLINK-21445][kubernetes] Application mode does not set the configuration when building PackagedProgram

2021-06-24 Thread GitBox


SteNicholas closed pull request #15020:
URL: https://github.com/apache/flink/pull/15020


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] rmetzger commented on pull request #16261: [FLINK-23045][tests] Harden RunnablesTest by not relying on timeout

2021-06-24 Thread GitBox


rmetzger commented on pull request #16261:
URL: https://github.com/apache/flink/pull/16261#issuecomment-868266209


   nit: The variable name in `testExecutorService_uncaughtExceptionHandler` 
here seems to be a copy-paste error: `ExecutorService scheduledExecutorService 
= Executors.newSingleThreadExecutor(threadFactory);`
   
https://github.com/apache/flink/pull/16261/files#diff-89b4d3c1a4159b34cf2304a8b426107165bd493aa1c5daee70ff3ca6c561c906R52
   You could consider applying the boy scout rule here, but I would also 
understand if you don't address this.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] rmetzger commented on a change in pull request #16261: [FLINK-23045][tests] Harden RunnablesTest by not relying on timeout

2021-06-24 Thread GitBox


rmetzger commented on a change in pull request #16261:
URL: https://github.com/apache/flink/pull/16261#discussion_r658512952



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/util/RunnablesTest.java
##
@@ -53,9 +55,9 @@ public void testExecutorService_uncaughtExceptionHandler() 
throws InterruptedExc
 () -> {
 throw new RuntimeException("foo");
 });
-Assert.assertTrue(
-"Expected handler to be called.",
-handlerCalled.await(TIMEOUT_MS, TimeUnit.MILLISECONDS));
+
+// expect handler to be called
+handlerCalled.await();

Review comment:
   Thanks. I wasn't aware of this behavior of the 
`ScheduledExecutorService` and I didn't read the `Expected handler not to be 
called.` message of the assertion properly.
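
A minimal, self-contained sketch of the hardened pattern discussed above: the uncaught-exception handler counts down a latch, and the test side awaits that latch without a local timeout, relying on an enclosing test-level timeout instead. Class and variable names here are illustrative and not the actual RunnablesTest code; the executor variable is a plain `ExecutorService`, matching the naming nit mentioned earlier.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

public class UncaughtExceptionHandlerSketch {
    public static void main(String[] args) throws InterruptedException {
        final CountDownLatch handlerCalled = new CountDownLatch(1);

        // Thread factory that installs an uncaught-exception handler on every new thread.
        final ThreadFactory threadFactory =
                runnable -> {
                    final Thread thread = new Thread(runnable);
                    thread.setUncaughtExceptionHandler((t, e) -> handlerCalled.countDown());
                    return thread;
                };

        final ExecutorService executorService = Executors.newSingleThreadExecutor(threadFactory);
        executorService.execute(
                () -> {
                    throw new RuntimeException("foo");
                });

        // Await without a hard-coded TIMEOUT_MS; an enclosing test-level timeout
        // is what guards against hanging in the actual test.
        handlerCalled.await();
        executorService.shutdown();
    }
}
```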




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-23153) Benchmark not compiling

2021-06-24 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-23153.

Resolution: Fixed

Fixed in b21b5ba92a88fef5bb4d96bb7ae78fafd9a0a590

> Benchmark not compiling
> ---
>
> Key: FLINK-23153
> URL: https://issues.apache.org/jira/browse/FLINK-23153
> Project: Flink
>  Issue Type: Bug
>  Components: Benchmarks
>Affects Versions: 1.14.0
>Reporter: Zhilong Hong
>Assignee: Zhilong Hong
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> In FLINK-23085, FutureUtils is moved from flink-runtime to flink-core. The 
> reference in flink-benchmark should also be changed. The reference is located 
> at: org/apache/flink/benchmark/operators/RecordSource.java.
> The travis CI is broken at this moment: 
> https://travis-ci.com/github/apache/flink-benchmarks/builds/230813827#L2026



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-benchmarks] dawidwys merged pull request #18: [FLINK-23153] Fix the reference of FutureUtils in RecordSource

2021-06-24 Thread GitBox


dawidwys merged pull request #18:
URL: https://github.com/apache/flink-benchmarks/pull/18


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-23155) Streaming File Sink s3 end-to-end test fail on azure

2021-06-24 Thread Xintong Song (Jira)
Xintong Song created FLINK-23155:


 Summary: Streaming File Sink s3 end-to-end test fail on azure
 Key: FLINK-23155
 URL: https://issues.apache.org/jira/browse/FLINK-23155
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.13.1
Reporter: Xintong Song
 Fix For: 1.13.2


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19511&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529&l=19624



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-benchmarks] dawidwys commented on pull request #18: [FLINK-23153] Fix the reference of FutureUtils in RecordSource

2021-06-24 Thread GitBox


dawidwys commented on pull request #18:
URL: https://github.com/apache/flink-benchmarks/pull/18#issuecomment-868263354


   LGTM, merging.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dawidwys commented on pull request #15055: [FLINK-21086][runtime][checkpoint] Support checkpoint alignment with finished input channels

2021-06-24 Thread GitBox


dawidwys commented on pull request #15055:
URL: https://github.com/apache/flink/pull/15055#issuecomment-868262596


   Regarding the 3 modifications, I agree with 1. and 3.; the second one should 
already be implemented, imo.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] RocMarshal commented on pull request #16070: [FLINK-23028][docs] Improve documentation for pages of SQL.

2021-06-24 Thread GitBox


RocMarshal commented on pull request #16070:
URL: https://github.com/apache/flink/pull/16070#issuecomment-868259581


   Hi @alpinegizmo, @wuchong, could you help confirm and merge this PR? Thank 
you so much for your help.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Myasuka commented on pull request #16177: [FLINK-22528][docs] Document latency tracking metrics for state accesses

2021-06-24 Thread GitBox


Myasuka commented on pull request #16177:
URL: https://github.com/apache/flink/pull/16177#issuecomment-868258295


   @rkhachatryan , would you please take a look at this PR?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-23153) Benchmark not compiling

2021-06-24 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17369262#comment-17369262
 ] 

Zhu Zhu commented on FLINK-23153:
-

Thanks for reporting this issue [~Thesharing]. I have assigned you the ticket.

> Benchmark not compiling
> ---
>
> Key: FLINK-23153
> URL: https://issues.apache.org/jira/browse/FLINK-23153
> Project: Flink
>  Issue Type: Bug
>  Components: Benchmarks
>Affects Versions: 1.14.0
>Reporter: Zhilong Hong
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> In FLINK-23085, FutureUtils is moved from flink-runtime to flink-core. The 
> reference in flink-benchmark should also be changed. The reference is located 
> at: org/apache/flink/benchmark/operators/RecordSource.java.
> The travis CI is broken at this moment: 
> https://travis-ci.com/github/apache/flink-benchmarks/builds/230813827#L2026



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-23153) Benchmark not compiling

2021-06-24 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu reassigned FLINK-23153:
---

Assignee: Zhilong Hong

> Benchmark not compiling
> ---
>
> Key: FLINK-23153
> URL: https://issues.apache.org/jira/browse/FLINK-23153
> Project: Flink
>  Issue Type: Bug
>  Components: Benchmarks
>Affects Versions: 1.14.0
>Reporter: Zhilong Hong
>Assignee: Zhilong Hong
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> In FLINK-23085, FutureUtils is moved from flink-runtime to flink-core. The 
> reference in flink-benchmark should also be changed. The reference is located 
> at: org/apache/flink/benchmark/operators/RecordSource.java.
> The travis CI is broken at this moment: 
> https://travis-ci.com/github/apache/flink-benchmarks/builds/230813827#L2026



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #16286: [FLINK-21445] Refactors the PackagedProgramRetriever implementation and adds configuration to PackagedProgram

2021-06-24 Thread GitBox


flinkbot commented on pull request #16286:
URL: https://github.com/apache/flink/pull/16286#issuecomment-868257439


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 4cbaee738f213985ae8d6850b36005e0ebafd5bf (Fri Jun 25 
06:26:39 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #16287: [FLINK-23066] Introduce TableEnvironment#from(TableDescriptor)

2021-06-24 Thread GitBox


flinkbot commented on pull request #16287:
URL: https://github.com/apache/flink/pull/16287#issuecomment-868257423


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 94b3fb08fa63562cf96a7dfbc308d3ee231176a9 (Fri Jun 25 
06:26:37 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi commented on a change in pull request #16283: [FLINK-23149][table-code-splitter] Introduce the new Java code splitter module

2021-06-24 Thread GitBox


JingsongLi commented on a change in pull request #16283:
URL: https://github.com/apache/flink/pull/16283#discussion_r658442269



##
File path: flink-table/flink-table-code-splitter/pom.xml
##
@@ -0,0 +1,91 @@
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+
+   <modelVersion>4.0.0</modelVersion>
+
+   <parent>
+   <artifactId>flink-table</artifactId>
+   <groupId>org.apache.flink</groupId>
+   <version>1.14-SNAPSHOT</version>
+   <relativePath>..</relativePath>
+   </parent>
+
+   <artifactId>flink-table-code-splitter</artifactId>
+   <name>Flink : Table : Code Splitter</name>
+   <description>
+   This module contains a tool to split generated Java code
+   so that each method does not exceed the limit of 64KB.
+   Antlr grammar files are copied from the official antlr repository

Review comment:
   Maybe we should create a NOTICE file? CC experts @rmetzger @zentol 




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-23154) Fix Python PackagedProgramRetrieverImplTest implementations

2021-06-24 Thread Matthias (Jira)
Matthias created FLINK-23154:


 Summary: Fix Python PackagedProgramRetrieverImplTest 
implementations
 Key: FLINK-23154
 URL: https://issues.apache.org/jira/browse/FLINK-23154
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Reporter: Matthias


We refactored the {{PackagedProgramRetriever}} class hierarchy as part of 
FLINK-21445. As part of this, I added some tests for the Python path but didn't 
look too deeply into how to set up the Flink Python opt folder within the test 
infrastructure. Instead, I added {{Assume}} calls.

This ticket is about removing these {{Assume}} calls and implementing the tests 
properly.

It affects tests in {{FromJarEntryClassInformationProviderTest}} and 
{{PackagedProgramRetrieverImplTest}}.

--

The corresponding code changes are still subject to change and are currently 
under review in a [draft PR for 
FLINK-21445|https://github.com/apache/flink/pull/16286].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22818) IgnoreInFlightDataITCase fails on azure

2021-06-24 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17369260#comment-17369260
 ] 

Xintong Song commented on FLINK-22818:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19504&view=logs&j=a57e0635-3fad-5b08-57c7-a4142d7d6fa9&t=5360d54c-8d94-5d85-304e-a89267eb785a&l=9503

> IgnoreInFlightDataITCase fails on azure
> ---
>
> Key: FLINK-22818
> URL: https://issues.apache.org/jira/browse/FLINK-22818
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.14.0
>Reporter: Dawid Wysakowicz
>Assignee: Anton Kalashnikov
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18465&view=logs&j=a549b384-c55a-52c0-c451-00e0477ab6db&t=81f2da51-a161-54c7-5b84-6001fed26530&l=9807
> {code}
> May 31 22:28:49 [ERROR] Failures: 
> May 31 22:28:49 [ERROR]   
> IgnoreInFlightDataITCase.testIgnoreInFlightDataDuringRecovery:101 
> May 31 22:28:49 Expected: a value less than <57464560L>
> May 31 22:28:49  but: <57464560L> was equal to <57464560L>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23066) Implement TableEnvironment#from

2021-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-23066:
---
Labels: pull-request-available  (was: )

> Implement TableEnvironment#from
> ---
>
> Key: FLINK-23066
> URL: https://issues.apache.org/jira/browse/FLINK-23066
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Ingo Bürk
>Assignee: Ingo Bürk
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] Airblader opened a new pull request #16287: [FLINK-23066] Introduce TableEnvironment#from(TableDescriptor)

2021-06-24 Thread GitBox


Airblader opened a new pull request #16287:
URL: https://github.com/apache/flink/pull/16287


   ## What is the purpose of the change
   
   This introduces `TableEnvironment#from(TableDescriptor)` as part of FLIP-129.
   
   Under the hood it is essentially a shortcut of registering a temporary table 
using the previously introduced `TableEnvironment#createTemporaryTable(String, 
TableDescriptor)` and then deferring to the existing `from(String)` method. For 
this we need to choose some unique name. Note that this name must not just be 
unique per descriptor but per registration, as the same descriptor may be used 
multiple times.
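
To illustrate the intended usage, a rough sketch; the connector, options, and schema below are made up for the example and are not taken from this PR:

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableDescriptor;
import org.apache.flink.table.api.TableEnvironment;

public class FromDescriptorExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());

        // Describe the source inline instead of registering it under an explicit name first.
        TableDescriptor descriptor =
                TableDescriptor.forConnector("datagen")
                        .schema(Schema.newBuilder().column("f0", DataTypes.INT()).build())
                        .option("number-of-rows", "10")
                        .build();

        // Internally this registers the descriptor under a generated, unique temporary name
        // and then defers to the existing from(String) lookup.
        Table table = tEnv.from(descriptor);
        table.execute().print();
    }
}
```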
   
   This PR relies on JavaDoc for documentation. More extensive documentation 
will be added as part of FLINK-23116.
   
   ## Brief change log
   
   * Added `TableEnvironment#from(TableDescriptor)`.
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   * `CalcITCase#testTableFromDescriptor`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: yes
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? JavaDocs
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] gaoyunhaii commented on pull request #15055: [FLINK-21086][runtime][checkpoint] Support checkpoint alignment with finished input channels

2021-06-24 Thread GitBox


gaoyunhaii commented on pull request #15055:
URL: https://github.com/apache/flink/pull/15055#issuecomment-868255975


   Hi @dawidwys, many thanks for the thorough review! The current modification in the 
`CheckpointBarrierHandler` is indeed only for tracking the pending and received barriers 
for each channel; it does not modify the logic regarding the alignment time metric. The 
current alignment time is indeed not reasonable for `CheckpointBarrierTracker`, and I 
agree that we could deal with it separately.
   
   Regarding the processing of `EndOfPartitionEvent` in the `CheckpointBarrierHandler`, 
I also agree that it would be safer to keep strict state for blocked and priority 
channels, and that we might split the implementation for `SingleCheckpointBarrierHandler` 
and `CheckpointTracker`. The only remaining concern is the complexity of a direct 
implementation of processing `EndOfPartitionEvent`. With a direct implementation we would 
need to carefully check consistency with barrier processing; for example, for aligned 
checkpoints with timeout, perhaps we also need to check the timeout when processing 
`EndOfPartitionEvent`?
   
   However, viewing `EndOfPartitionEvent` as "barriers for each input channel followed 
by an end notification" does conflict with point 1, since when processing the virtually 
added last checkpoint barrier, the expected logic for blocking/unblocking and 
prioritizing channels differs from that for normal barriers. I am also not inclined to 
add an additional flag to `BarrierHandlerState#barrierReceived()`, since it would make 
the logic fragile. Overall I agree that the commit in 
https://github.com/dawidwys/flink/commit/c1a4867cd090997bd504f8e201324214bdebece3
 would be preferable. I'll update the PR with this approach, perhaps with some 
implementation modifications:
   
   1. Consider the timeout for `EndOfPartitionEvent` for alternating checkpoints.
   2. For `CheckpointBarrierTracker`, we might need to find the largest checkpoint that 
is aligned when `EndOfPartitionEvent` is received.
   3. We might also need to maintain the set of aligned channels for 
`SingleCheckpointBarrierHandler`, otherwise there could be a problem if we receive both 
a barrier and an `EndOfPartitionEvent` from the same channel during a checkpoint. For 
example, suppose we have 3 channels:
   
   ```
   a. Received barrier@#1, barrierReceived = 1, openChannels = 3
   b. Received barrier@#2, barrierReceived = 2, openChannels = 3
   c. Received EndOfPartitionEvent@#1, barrierReceived = 2, openChannels = 2, 
triggered wrongly.
   ```
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] XComp opened a new pull request #16286: [FLINK-21445] Refactors the PackagedProgramRetriever implementation and adds configuration to PackagedProgram

2021-06-24 Thread GitBox


XComp opened a new pull request #16286:
URL: https://github.com/apache/flink/pull/16286


   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluser with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't 
know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] rmetzger commented on pull request #16281: [FLINK-23129][docs] Document ApplicationMode limitations

2021-06-24 Thread GitBox


rmetzger commented on pull request #16281:
URL: https://github.com/apache/flink/pull/16281#issuecomment-868255611


   Thanks a lot for your review. I addressed all your comments.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-benchmarks] Thesharing edited a comment on pull request #18: [FLINK-23153] Fix the reference of FutureUtils in RecordSource

2021-06-24 Thread GitBox


Thesharing edited a comment on pull request #18:
URL: https://github.com/apache/flink-benchmarks/pull/18#issuecomment-868255057


   I had an offline discussion with @zhuzhurk. We both agree that the reference 
of FutureUtils here could be removed by replacing 
`FutureUtils.completedVoidFuture()` with 
`CompletableFuture.completedFuture(null)`.
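
A minimal sketch of the suggested replacement; the wrapper method below is hypothetical, and only the two calls named in the comment above are taken from the discussion:

```java
import java.util.concurrent.CompletableFuture;

public class CompletedVoidFutureSketch {

    // Before (depends on Flink's FutureUtils, whose package moved in FLINK-23085):
    // CompletableFuture<Void> done = FutureUtils.completedVoidFuture();

    // After: plain JDK API, no dependency on where FutureUtils lives.
    static CompletableFuture<Void> completedVoidFuture() {
        return CompletableFuture.completedFuture(null);
    }

    public static void main(String[] args) {
        System.out.println(completedVoidFuture().isDone()); // prints "true"
    }
}
```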


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-benchmarks] Thesharing edited a comment on pull request #18: [FLINK-23153] Fix the reference of FutureUtils in RecordSource

2021-06-24 Thread GitBox


Thesharing edited a comment on pull request #18:
URL: https://github.com/apache/flink-benchmarks/pull/18#issuecomment-868255057


   I had an offline discussion with @zhuzhurk. We both agree that the reference 
of FutureUtils here could be removed by replacing 
`FutureUtils.completedVoidFuture()` with 
`CompletableFuture.completedFuture(null)`.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-benchmarks] Thesharing commented on pull request #18: [FLINK-23153] Fix the reference of FutureUtils in RecordSource

2021-06-24 Thread GitBox


Thesharing commented on pull request #18:
URL: https://github.com/apache/flink-benchmarks/pull/18#issuecomment-868255057


   After offline discussion with @zhuzhurk, we both agree that the reference of 
FutureUtils here could be removed by replacing 
`FutureUtils.completedVoidFuture()` with 
`CompletableFuture.completedFuture(null)`.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-23153) Benchmark not compiling

2021-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-23153:
---
Labels: pull-request-available  (was: )

> Benchmark not compiling
> ---
>
> Key: FLINK-23153
> URL: https://issues.apache.org/jira/browse/FLINK-23153
> Project: Flink
>  Issue Type: Bug
>  Components: Benchmarks
>Affects Versions: 1.14.0
>Reporter: Zhilong Hong
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> In FLINK-23085, FutureUtils is moved from flink-runtime to flink-core. The 
> reference in flink-benchmark should also be changed. The reference is located 
> at: org/apache/flink/benchmark/operators/RecordSource.java.
> The travis CI is broken at this moment: 
> https://travis-ci.com/github/apache/flink-benchmarks/builds/230813827#L2026



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #16285: [FLINK-23070][table-api-java] Introduce TableEnvironment#createTable

2021-06-24 Thread GitBox


flinkbot commented on pull request #16285:
URL: https://github.com/apache/flink/pull/16285#issuecomment-868253867


   
   ## CI report:
   
   * c3f475e4c4fcd8bbf0a7695c75630e444308eee1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16258: [FLINK-23120][python] Fix ByteArrayWrapperSerializer.serialize to use writeInt to serialize the length

2021-06-24 Thread GitBox


flinkbot edited a comment on pull request #16258:
URL: https://github.com/apache/flink/pull/16258#issuecomment-866824620


   
   ## CI report:
   
   * 58636681dabbdae5ef9993b87d29ab25f99efc89 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19508)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19443)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] KarmaGYZ commented on pull request #16173: [FLINK-15031][runtime] Calculate required shuffle memory before allocating slots if resources are specified

2021-06-24 Thread GitBox


KarmaGYZ commented on pull request #16173:
URL: https://github.com/apache/flink/pull/16173#issuecomment-868253073


   Thanks for the PR, @jinxing64. Would you like to rebase on the latest 
`master` and resolve the conflict? I think @zhuzhurk may help to review it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-benchmarks] Thesharing opened a new pull request #18: [FLINK-23153] Fix the reference of FutureUtils in RecordSource

2021-06-24 Thread GitBox


Thesharing opened a new pull request #18:
URL: https://github.com/apache/flink-benchmarks/pull/18


   In FLINK-23085, FutureUtils is moved from flink-runtime to flink-core. The 
reference in flink-benchmark should also be changed. The reference is located 
at: org/apache/flink/benchmark/operators/RecordSource.java.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-23153) Benchmark not compiling

2021-06-24 Thread Zhilong Hong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhilong Hong updated FLINK-23153:
-
Description: 
In FLINK-23085, FutureUtils is moved from flink-runtime to flink-core. The 
reference in flink-benchmark should also be changed. The reference is located 
at: org/apache/flink/benchmark/operators/RecordSource.java.

The travis CI is broken at this moment: 
https://travis-ci.com/github/apache/flink-benchmarks/builds/230813827#L2026

  was:
In FLINK-23085, FutureUtils is moved from flink-runtime to flink-core. The 
reference in flink-benchmark should also be changed. One known reference is 
located at: org/apache/flink/benchmark/operators/RecordSource.java.

The travis CI is broken at this moment: 
https://travis-ci.com/github/apache/flink-benchmarks/builds/230813827#L2026


> Benchmark not compiling
> ---
>
> Key: FLINK-23153
> URL: https://issues.apache.org/jira/browse/FLINK-23153
> Project: Flink
>  Issue Type: Bug
>  Components: Benchmarks
>Affects Versions: 1.14.0
>Reporter: Zhilong Hong
>Priority: Blocker
> Fix For: 1.14.0
>
>
> In FLINK-23085, FutureUtils is moved from flink-runtime to flink-core. The 
> reference in flink-benchmark should also be changed. The reference is located 
> at: org/apache/flink/benchmark/operators/RecordSource.java.
> The travis CI is broken at this moment: 
> https://travis-ci.com/github/apache/flink-benchmarks/builds/230813827#L2026



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23153) Benchmark not compiling

2021-06-24 Thread Zhilong Hong (Jira)
Zhilong Hong created FLINK-23153:


 Summary: Benchmark not compiling
 Key: FLINK-23153
 URL: https://issues.apache.org/jira/browse/FLINK-23153
 Project: Flink
  Issue Type: Bug
  Components: Benchmarks
Affects Versions: 1.14.0
Reporter: Zhilong Hong
 Fix For: 1.14.0


In FLINK-23085, FutureUtils is moved from flink-runtime to flink-core. The 
reference in flink-benchmark should also be changed. One known reference is 
located at: org/apache/flink/benchmark/operators/RecordSource.java.

The travis CI is broken at this moment: 
https://travis-ci.com/github/apache/flink-benchmarks/builds/230813827#L2026



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23152) Flink - {"errors":["Service temporarily unavailable due to an ongoing leader election. Please refresh."]}

2021-06-24 Thread Matthias (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17369252#comment-17369252
 ] 

Matthias commented on FLINK-23152:
--

Hi [~liu zhuang], thanks for sharing the logs (though it would be more helpful 
to share the actual text rather than a screenshot; your screenshot is 
hard to read).

The error message you're seeing in the web UI is normal when electing a new 
leader. Based on your screenshot it looks like there is a 
{{NumberFormatException}} causing the error in your main method.

Another hint: it would be good to use the [user mailing 
list|https://flink.apache.org/community.html#mailing-lists] for this kind of 
question.

> Flink - {"errors":["Service temporarily unavailable due to an ongoing leader 
> election. Please refresh."]}
> -
>
> Key: FLINK-23152
> URL: https://issues.apache.org/jira/browse/FLINK-23152
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.12.1
> Environment: Apache Hadoop Yarn version is 3.11
> Apache Zookeeper version is 3.5.5
>Reporter: liu zhuang
>Priority: Minor
> Attachments: jobmanager log.png
>
>
> Hi Team,
> I deployed a Flink cluster in yarn-application mode, with JobManager HA 
> deployed via Apache ZooKeeper. I use the following command to submit the 
> Flink job:
>  
> flink run-application -t yarn-application \
> -c com.model \
> -Djobmanager.memory.process.size=1024m \
> -Dtaskmanager.memory.process.size=1024m \
> -Dyarn.provided.lib.dirs="hdfs:///user/flink/libs;hdfs:///user/flink/plugins" 
> \
> -Dyarn.application.name="test" \
> hdfs:///user/flink/jars/model-1.0.jar
>  
> But Flink Web UI is throwing an error saying
> "{"errors":["Service temporarily unavailable due to an ongoing leader 
> election. Please refresh."]}".
>  
> In the ZooKeeper node, this Flink job has "[jobgraphs, leader, leaderlatch]".
> Attached is the JobManager log of this Flink job. At the same 
> time, when I use the Flink per-job mode to submit the job, it runs 
> normally.
> Please help to fix the issue with the UI and HA.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23152) Flink - {"errors":["Service temporarily unavailable due to an ongoing leader election. Please refresh."]}

2021-06-24 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias updated FLINK-23152:
-
Priority: Minor  (was: Major)

> Flink - {"errors":["Service temporarily unavailable due to an ongoing leader 
> election. Please refresh."]}
> -
>
> Key: FLINK-23152
> URL: https://issues.apache.org/jira/browse/FLINK-23152
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.12.1
> Environment: Apache Hadoop Yarn version is 3.11
> Apache Zookeeper version is 3.5.5
>Reporter: liu zhuang
>Priority: Minor
> Attachments: jobmanager log.png
>
>
> Hi Team,
> I deployed a Flink cluster in yarn-application mode, with JobManager HA 
> deployed via Apache ZooKeeper. I use the following command to submit the 
> Flink job:
>  
> flink run-application -t yarn-application \
> -c com.model \
> -Djobmanager.memory.process.size=1024m \
> -Dtaskmanager.memory.process.size=1024m \
> -Dyarn.provided.lib.dirs="hdfs:///user/flink/libs;hdfs:///user/flink/plugins" 
> \
> -Dyarn.application.name="test" \
> hdfs:///user/flink/jars/model-1.0.jar
>  
> But Flink Web UI is throwing an error saying
> "{"errors":["Service temporarily unavailable due to an ongoing leader 
> election. Please refresh."]}".
>  
> In the ZooKeeper node, this Flink job has "[jobgraphs, leader, leaderlatch]".
> Attached is the JobManager log of this Flink job. At the same 
> time, when I use the Flink per-job mode to submit the job, it runs 
> normally.
> Please help to fix the issue with the UI and HA.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23147) ThreadPools can be poisoned by context class loaders

2021-06-24 Thread Arvid Heise (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17369251#comment-17369251
 ] 

Arvid Heise commented on FLINK-23147:
-

Hi Chesnay, I'm lacking a bit of knowledge to contribute to the discussion:
 * I was assuming that {{ExecutorThreadFactory}} is internal and is only used 
by the system.
 * I gathered that your intent is to extract the {{AkkaRpcService}} (the only 
akka related class that uses the factory) into a plugin?
 * If something like {{AkkaRpcService}} lives in a plugin, why do we want it to 
outlive the plugin life cycle? Couldn't we wait with closing the plugin until 
everything has been processed? Or discard any remaining message at closing the 
plugin?

In general, I don't feel comfortable with letting plugins use the Flink CL as 
they see fit. The main purpose of plugins is to encapsulate user code with 
potentially conflicting libraries.

I could see us adding a special type of plugin that can break free of these 
restrictions: one that doesn't encapsulate user code, but system code extracted 
to live in a different CL. However, at this point, I don't see the benefit over 
just putting it into {{lib}} if a plugin's classes are loaded through the Flink 
CL anyway. Maybe we rather need plugins that are completely bound to the life 
cycle of the Flink cluster (aren't they outliving jobs anyway?).

> ThreadPools can be poisoned by context class loaders
> 
>
> Key: FLINK-23147
> URL: https://issues.apache.org/jira/browse/FLINK-23147
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Reporter: Chesnay Schepler
>Priority: Major
> Fix For: 1.14.0
>
>
> Newly created threads in a thread pool inherit the context class loader (CCL) 
> of the currently running thread.
> For thread pools this is very problematic because the CCL is unlikely to be 
> reset at any point; not only can this leak another CL by accident, it can 
> also cause class loading issues, for example when using a {{ServiceLoader}} 
> because it relies on the CCL.
> With the Scala-free runtime this means, for example, that if an actor system 
> thread schedules something into the future thread pool of the JM, then a new 
> thread is created which uses a plugin loader as its CCL. The plugin loaders are 
> quite restrictive and prevent the loading of 3rd-party dependencies; so if 
> the JM schedules something into the future thread pool which requires one of 
> these dependencies to be accessible, then we're gambling as to whether this 
> dependency can actually be loaded in the end.
> Because it is difficult to ensure that we set the CCL correctly on all 
> transitions from Akka to Flink land, I suggest adding a safeguard to the 
> ExecutorThreadFactory to enforce that newly created threads are always 
> initialized with the CL that has loaded Flink.
> /cc [~arvid] [~sewen]
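
To make the proposed safeguard concrete, a minimal hypothetical sketch of a thread factory that pins the context class loader of newly created threads to the class loader that loaded Flink (this is not the actual ExecutorThreadFactory change, just an illustration of the idea):

{code}
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public final class CclPinningThreadFactory implements ThreadFactory {

    // Class loader that loaded this (Flink) class, independent of the creating thread's CCL.
    private static final ClassLoader FLINK_CLASS_LOADER =
            CclPinningThreadFactory.class.getClassLoader();

    private final String namePrefix;
    private final AtomicInteger counter = new AtomicInteger();

    public CclPinningThreadFactory(String namePrefix) {
        this.namePrefix = namePrefix;
    }

    @Override
    public Thread newThread(Runnable runnable) {
        Thread thread = new Thread(runnable, namePrefix + "-" + counter.incrementAndGet());
        // Safeguard: do not inherit the creator's context class loader (e.g. a plugin loader);
        // always start from the class loader that loaded Flink.
        thread.setContextClassLoader(FLINK_CLASS_LOADER);
        return thread;
    }
}
{code}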



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #16283: [FLINK-23149][table-code-splitter] Introduce the new Java code splitter module

2021-06-24 Thread GitBox


flinkbot edited a comment on pull request #16283:
URL: https://github.com/apache/flink/pull/16283#issuecomment-868177111


   
   ## CI report:
   
   * e4990b25ca06e124e3d84da5b2dd77580340975c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19509)
 
   * d98990b3a75763ec378cefa927c6880365876116 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19515)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #16285: [FLINK-23070][table-api-java] Introduce TableEnvironment#createTable

2021-06-24 Thread GitBox


flinkbot commented on pull request #16285:
URL: https://github.com/apache/flink/pull/16285#issuecomment-868222480


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit c3f475e4c4fcd8bbf0a7695c75630e444308eee1 (Fri Jun 25 
05:49:35 UTC 2021)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-22470) The root cause of the exception encountered during compiling the job was not exposed to users in certain cases

2021-06-24 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17332843#comment-17332843
 ] 

Dian Fu edited comment on FLINK-22470 at 6/25/21, 5:45 AM:
---

Merged to
 - master via bef98cf0ba168dd4f0b2c8614d2179cec8b54ff8
 - release-1.13 via dfa6623f3d4a1fead2be88aa964ed69a1219d382
 - release-1.12 via fbd38b53197c244af5ed3af54cebdee75b393a1d
 - release-1.11 via 6eb6ca5fbf003da38bb0c104a6f31d8a70be4392


was (Author: dian.fu):
Merged to
- master via bef98cf0ba168dd4f0b2c8614d2179cec8b54ff8
- release-1.13 via dfa6623f3d4a1fead2be88aa964ed69a1219d382
- release-1.12 via fbd38b53197c244af5ed3af54cebdee75b393a1d

> The root cause of the exception encountered during compiling the job was not 
> exposed to users in certain cases
> --
>
> Key: FLINK-22470
> URL: https://issues.apache.org/jira/browse/FLINK-22470
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0, 1.13.1, 1.12.4
>
>
> For the following job:
> {code}
> def test():
> from pyflink.table import DataTypes, BatchTableEnvironment, 
> EnvironmentSettings
> env_settings = 
> EnvironmentSettings.new_instance().in_batch_mode().use_blink_planner().build()
> table_env = 
> BatchTableEnvironment.create(environment_settings=env_settings)
> table_env \
> .get_config() \
> .get_configuration() \
> .set_string(
> "pipeline.jars",
> 
> "file:///Users/dianfu/code/src/alibaba/ververica-connectors/flink-sql-avro-1.12.0.jar"
> )
> table = table_env.from_elements(
> [('111', '222')],
> schema=DataTypes.ROW([
> DataTypes.FIELD('text', DataTypes.STRING()),
> DataTypes.FIELD('text1', DataTypes.STRING())
> ])
> )
> sink_ddl = f"""
> create table Results(
> a STRING,
> b STRING
> ) with (
> 'connector' = 'filesystem',
> 'path' = '/Users/dianfu/tmp/',
> 'format' = 'avro'
> )
> """
> table_env.execute_sql(sink_ddl)
> table.execute_insert("Results").wait()
> if __name__ == "__main__":
> test()
> {code}
> It throws the following exception:
> {code}
> pyflink.util.exceptions.TableException: Failed to execute sql
>at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:699)
>at 
> org.apache.flink.table.api.internal.TableImpl.executeInsert(TableImpl.java:572)
>at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>at java.lang.reflect.Method.invoke(Method.java:498)
>at 
> org.apache.flink.api.python.shaded.py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
>at 
> org.apache.flink.api.python.shaded.py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
>at 
> org.apache.flink.api.python.shaded.py4j.Gateway.invoke(Gateway.java:282)
>at 
> org.apache.flink.api.python.shaded.py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
>at 
> org.apache.flink.api.python.shaded.py4j.commands.CallCommand.execute(CallCommand.java:79)
>at 
> org.apache.flink.api.python.shaded.py4j.GatewayConnection.run(GatewayConnection.java:238)
>at java.lang.Thread.run(Thread.java:748)
> Process finished with exit code 1
> {code}
> The root cause isn't exposed and it's difficult for users to figure out what 
> happens.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22470) The root cause of the exception encountered during compiling the job was not exposed to users in certain cases

2021-06-24 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-22470:

Fix Version/s: 1.11.4

> The root cause of the exception encountered during compiling the job was not 
> exposed to users in certain cases
> --
>
> Key: FLINK-22470
> URL: https://issues.apache.org/jira/browse/FLINK-22470
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.4, 1.14.0, 1.13.1, 1.12.4
>
>
> For the following job:
> {code}
> def test():
> from pyflink.table import DataTypes, BatchTableEnvironment, 
> EnvironmentSettings
> env_settings = 
> EnvironmentSettings.new_instance().in_batch_mode().use_blink_planner().build()
> table_env = 
> BatchTableEnvironment.create(environment_settings=env_settings)
> table_env \
> .get_config() \
> .get_configuration() \
> .set_string(
> "pipeline.jars",
> 
> "file:///Users/dianfu/code/src/alibaba/ververica-connectors/flink-sql-avro-1.12.0.jar"
> )
> table = table_env.from_elements(
> [('111', '222')],
> schema=DataTypes.ROW([
> DataTypes.FIELD('text', DataTypes.STRING()),
> DataTypes.FIELD('text1', DataTypes.STRING())
> ])
> )
> sink_ddl = f"""
> create table Results(
> a STRING,
> b STRING
> ) with (
> 'connector' = 'filesystem',
> 'path' = '/Users/dianfu/tmp/',
> 'format' = 'avro'
> )
> """
> table_env.execute_sql(sink_ddl)
> table.execute_insert("Results").wait()
> if __name__ == "__main__":
> test()
> {code}
> It throws the following exception:
> {code}
> pyflink.util.exceptions.TableException: Failed to execute sql
>at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:699)
>at 
> org.apache.flink.table.api.internal.TableImpl.executeInsert(TableImpl.java:572)
>at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>at java.lang.reflect.Method.invoke(Method.java:498)
>at 
> org.apache.flink.api.python.shaded.py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
>at 
> org.apache.flink.api.python.shaded.py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
>at 
> org.apache.flink.api.python.shaded.py4j.Gateway.invoke(Gateway.java:282)
>at 
> org.apache.flink.api.python.shaded.py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
>at 
> org.apache.flink.api.python.shaded.py4j.commands.CallCommand.execute(CallCommand.java:79)
>at 
> org.apache.flink.api.python.shaded.py4j.GatewayConnection.run(GatewayConnection.java:238)
>at java.lang.Thread.run(Thread.java:748)
> Process finished with exit code 1
> {code}
> The root cause isn't exposed and it's difficult for users to figure out what 
> happens.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] AHeise merged pull request #16277: [FLINK-22492][test] Use order-agnostic check in KinesisTableApiITCase.

2021-06-24 Thread GitBox


AHeise merged pull request #16277:
URL: https://github.com/apache/flink/pull/16277


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] AHeise commented on pull request #16277: [FLINK-22492][test] Use order-agnostic check in KinesisTableApiITCase.

2021-06-24 Thread GitBox


AHeise commented on pull request #16277:
URL: https://github.com/apache/flink/pull/16277#issuecomment-868220479


   Failure unrelated (FLINK-22818) merging.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-23070) Implement TableEnvironment#createTable

2021-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-23070:
---
Labels: pull-request-available  (was: )

> Implement TableEnvironment#createTable
> --
>
> Key: FLINK-23070
> URL: https://issues.apache.org/jira/browse/FLINK-23070
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Ingo Bürk
>Assignee: Ingo Bürk
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] Airblader opened a new pull request #16285: [FLINK-23070][table-api-java] Introduce TableEnvironment#createTable

2021-06-24 Thread GitBox


Airblader opened a new pull request #16285:
URL: https://github.com/apache/flink/pull/16285


   ## What is the purpose of the change
   
   This implements `TableEnvironment#createTable(String, TableDescriptor)` as 
part of FLIP-129. It behaves the same as the previously introduced 
`#createTemporaryTable(String, TableDescriptor)` with the difference of 
actually going through the catalog to persist the table.
   
   ## Brief change log
   
   * Introduced new API method `TableEnvironment#createTable(String, 
TableDescriptor)`.
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   * `CalcITCase#testCreateTableFromDescriptor`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: yes
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? docs + JavaDocs
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16283: [FLINK-23149][table-code-splitter] Introduce the new Java code splitter module

2021-06-24 Thread GitBox


flinkbot edited a comment on pull request #16283:
URL: https://github.com/apache/flink/pull/16283#issuecomment-868177111


   
   ## CI report:
   
   * e4990b25ca06e124e3d84da5b2dd77580340975c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19509)
 
   * d98990b3a75763ec378cefa927c6880365876116 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (FLINK-22147) Refactor Partition Discovery Logic in KafkaSourceEnumerator

2021-06-24 Thread Jiangjie Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin resolved FLINK-22147.
--
Fix Version/s: 1.14.0
   Resolution: Implemented

Merged to master. 
1418a1ddd025adb3b502b8d7a89d0f338aa40c29
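
For future reference, a rough sketch of the refactored discovery flow described 
in the issue below (method and field names here are illustrative only; see the 
merged commit for the actual implementation):

{code:java}
// Sketch only: this fragment is assumed to live inside the enumerator class,
// which holds the `context`, `assignedPartitions` and `pendingPartitions` fields.

// 1. Worker thread: fetch descriptions of the subscribed topics/partitions.
private void discoverPartitions() {
    context.callAsync(this::fetchSubscribedTopicPartitions, this::checkPartitionChanges);
}

// 2. Coordinator thread: filter out partitions that are already pending or assigned.
private void checkPartitionChanges(Set<TopicPartition> fetched, Throwable t) {
    if (t != null) {
        throw new FlinkRuntimeException("Failed to list subscribed partitions", t);
    }
    Set<TopicPartition> newPartitions = new HashSet<>(fetched);
    newPartitions.removeAll(assignedPartitions);
    newPartitions.removeAll(pendingPartitions);
    if (newPartitions.isEmpty()) {
        return;
    }
    context.callAsync(
            // 3. Worker thread: fetch offsets and create splits for the new partitions only.
            () -> initializeSplits(newPartitions),
            // 4. Coordinator thread: mark the new splits as pending and try to assign them.
            this::handleNewSplits);
}
{code}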

> Refactor Partition Discovery Logic in KafkaSourceEnumerator
> ---
>
> Key: FLINK-22147
> URL: https://issues.apache.org/jira/browse/FLINK-22147
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0
>Reporter: Qingsheng Ren
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> Currently the logic of partition discovery is: the worker thread checks whether 
> there are new partitions and initializes new splits if so, then the coordinator 
> thread marks these splits as pending and tries to make assignments.
> Under the current design, the worker thread needs to keep an internal data 
> structure tracking already discovered partitions, which duplicates the 
> pending splits + assigned partitions tracked by the coordinator thread. 
> This kind of double-bookkeeping is usually fragile. 
> Another issue is that the worker thread always fetches descriptions of ALL 
> topics at partition discovery, which becomes a problem when working with 
> giant Kafka clusters with millions of topics/partitions. 
> In order to fix the issues above, a refactor is needed for the partition 
> discovery logic in the Kafka enumerator. Basically the logic can be changed to:
>  # The worker thread fetches descriptions of subscribed topics/partitions, 
> then hands over to coordinator thread
>  # The coordinator thread filters out already discovered partitions (pending 
> + assigned partitions), then invokes worker thread with {{callAsync}} to 
> fetch offsets for new partitions
>  #  The worker thread fetches offsets and creates splits for new partitions, 
> then hands over new splits to coordinator thread
>  # The coordinator thread marks these splits as pending and try to make 
> assignment. 
> Discussion of this issue can be found in 
> [https://github.com/apache/flink/pull/15461] .



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-22147) Refactor Partition Discovery Logic in KafkaSourceEnumerator

2021-06-24 Thread Jiangjie Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin reassigned FLINK-22147:


Assignee: Qingsheng Ren

> Refactor Partition Discovery Logic in KafkaSourceEnumerator
> ---
>
> Key: FLINK-22147
> URL: https://issues.apache.org/jira/browse/FLINK-22147
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0
>Reporter: Qingsheng Ren
>Assignee: Qingsheng Ren
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> Currently the logic of partition discovery is: the worker thread checks whether 
> there are new partitions and initializes new splits if so, then the coordinator 
> thread marks these splits as pending and tries to make assignments.
> Under the current design, the worker thread needs to keep an internal data 
> structure tracking already discovered partitions, which duplicates the 
> pending splits + assigned partitions tracked by the coordinator thread. 
> This kind of double-bookkeeping is usually fragile. 
> Another issue is that the worker thread always fetches descriptions of ALL 
> topics at partition discovery, which becomes a problem when working with 
> giant Kafka clusters with millions of topics/partitions. 
> In order to fix the issues above, a refactor is needed for the partition 
> discovery logic in the Kafka enumerator. Basically the logic can be changed to:
>  # The worker thread fetches descriptions of subscribed topics/partitions, 
> then hands over to coordinator thread
>  # The coordinator thread filters out already discovered partitions (pending 
> + assigned partitions), then invokes worker thread with {{callAsync}} to 
> fetch offsets for new partitions
>  #  The worker thread fetches offsets and creates splits for new partitions, 
> then hands over new splits to coordinator thread
>  # The coordinator thread marks these splits as pending and try to make 
> assignment. 
> Discussion of this issue can be found in 
> [https://github.com/apache/flink/pull/15461] .



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] becketqin commented on pull request #15531: [FLINK-22147][connector/kafka] Refactor partition discovery logic in Kafka source enumerator

2021-06-24 Thread GitBox


becketqin commented on pull request #15531:
URL: https://github.com/apache/flink/pull/15531#issuecomment-868210308


   @PatrickRen Thanks for updating the patch. Merged to master.
   1418a1ddd025adb3b502b8d7a89d0f338aa40c29


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] becketqin merged pull request #15531: [FLINK-22147][connector/kafka] Refactor partition discovery logic in Kafka source enumerator

2021-06-24 Thread GitBox


becketqin merged pull request #15531:
URL: https://github.com/apache/flink/pull/15531


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] EronWright commented on pull request #15950: [FLINK-22700] [api] Propagate watermarks to sink API

2021-06-24 Thread GitBox


EronWright commented on pull request #15950:
URL: https://github.com/apache/flink/pull/15950#issuecomment-868207493


   Thanks @AHeise  and @tzulitai  for the review, I'll push an update tomorrow.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Airblader commented on a change in pull request #16283: [FLINK-23149][table-code-splitter] Introduce the new Java code splitter module

2021-06-24 Thread GitBox


Airblader commented on a change in pull request #16283:
URL: https://github.com/apache/flink/pull/16283#discussion_r658472428



##
File path: 
flink-table/flink-table-code-splitter/src/main/java/org/apache/flink/table/codesplit/CodeSplitUtil.java
##
@@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.codesplit;
+
+import org.antlr.v4.runtime.CharStream;
+import org.antlr.v4.runtime.ParserRuleContext;
+import org.antlr.v4.runtime.misc.Interval;
+
+import java.util.concurrent.atomic.AtomicLong;
+
+/** Utils for rewriters. */
+public class CodeSplitUtil {

Review comment:
   All classes seem to be missing `@Internal` annotations.
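   
   For example (a sketch; only the annotation is new):
   
   ```java
   import org.apache.flink.annotation.Internal;
   
   /** Utils for rewriters. */
   @Internal
   public class CodeSplitUtil {
       // ... existing contents unchanged
   }
   ```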

##
File path: flink-table/flink-table-code-splitter/pom.xml
##
@@ -0,0 +1,91 @@
+
+
+http://maven.apache.org/POM/4.0.0";
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd";>
+
+   4.0.0
+
+   
+   flink-table
+   org.apache.flink
+   1.14-SNAPSHOT
+   ..
+   
+
+   flink-table-code-splitter
+   Flink : Table : Code Splitter 
+   
+   This module contains a tool to split generated Java code
+   so that each method does not exceed the limit of 64KB.
+   Antlr grammar files are copied from the official antlr 
repository

Review comment:
   Given that files are copied verbatim from other projects, how will this 
be handled in terms of licensing? 

##
File path: flink-table/flink-table-code-splitter/src/main/antlr4/JavaLexer.g4
##
@@ -0,0 +1,184 @@
+/**

Review comment:
   The original BSD license has been removed. I think this violates the 
licensing terms.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-23149) Introduce Java code splitter for code generation

2021-06-24 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-23149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ingo Bürk reassigned FLINK-23149:
-

Assignee: Caizhi Weng

> Introduce Java code splitter for code generation
> 
>
> Key: FLINK-23149
> URL: https://issues.apache.org/jira/browse/FLINK-23149
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Runtime
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> In this first step we introduce a Java code splitter for the generated Java 
> code. This splitter comes into effect after the original Java code is 
> generated, hoping to solve the 64KB problem in one shot.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22610) The test-jar and test-source-jar of flink-connector-kafka should include all classes

2021-06-24 Thread LIU Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LIU Xiao updated FLINK-22610:
-
Description: 
The test-jar of old kafka connector (flink-connector-kafka-base and 
flink-connector-kafka-0.11) includes convenient utility classes 
(KafkaTestEnvironment and KafkaTestEnvironmentImpl, etc.) to start an embedded 
kafka in unit test, and we used the utility classes to build some test cases 
for our project.

Now the utility classes other than KafkaTestEnvironmentImpl seem to be gone in 
test-jar of new kafka connector (flink-connector-kafka), and I find that is 
because they are not included in the configuration of maven-jar-plugin in 
pom.xml:
{code:xml}
...

org.apache.maven.plugins
maven-jar-plugin



test-jar



**/KafkaTestEnvironmentImpl*
META-INF/LICENSE
META-INF/NOTICE





...
{code}
This configuration seems to be inherited from flink-connector-kafka-0.11, but 
actually the configuration of flink-connector-kafka-base should be used:
{code:xml}
...

maven-jar-plugin



test-jar




...
{code}
The test-source-jar has similar problem:
{code:xml}
...

org.apache.maven.plugins
maven-source-plugin


attach-test-sources

test-jar-no-fork




false


**/KafkaTestEnvironmentImpl*
META-INF/LICENSE
META-INF/NOTICE





...
{code}
I think it should be:
{code:xml}
...

org.apache.maven.plugins
maven-source-plugin


attach-test-sources

test-jar-no-fork




false





...
{code}

  was:
The test-jar of old kafka connector (flink-connector-kafka-base and 
flink-connector-kafka-0.11) includes convenient utility classes 
(KafkaTestEnvironment and KafkaTestEnvironmentImpl, etc.) to start an embedded 
kafka in unit test, and we used the utility classes to build some test cases 
for our project.

Now the utility classes other than KafkaTestEnvironmentImpl seem to be gone in 
test-jar of new kafka connector (flink-connector-kafka), and I find that is 
because they are not included in the configuration of maven-jar-plugin in 
pom.xml:
{code:java}



org.apache.maven.plugins
maven-jar-plugin



test-jar



**/KafkaTestEnvironmentImpl*
META-INF/LICENSE
META-INF/NOTICE





{code}
This configuration seems to be inherited from flink-connector-kafka-0.11, but I 
think the configuration of flink-connector-kafka-base should be used:
{code:java}
  

  
maven-jar-plugin

  

  test-jar

  

  
{code}


> The test-jar and test-source-jar of flink-connector-kafka should include all 
> classes
> 
>
> Key: FLINK-22610
> URL: https://issues.apache.org/jira/browse/FLINK-22610
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0, 1.12.3
>Reporter: LIU Xiao
>Priority: Minor
>
> The test-jar of old kafka connector (flink-connector-kafka-base and 
> flink-connector-kafka-0.11) includes convenient utility classes 
> (KafkaTestEnvironment and KafkaTestEnvironmentImpl, etc.) to start an 
> embedded kafka in unit test, and we used the utility classes to build some 
> test cases for our project.
> Now the utility classes other than KafkaTestEnvironmentImpl seem to be gone 
> in test-jar of new kafka connector (flink-connector-kafka), and I find that 
> is because they are not included in the configuration of maven-jar-plugin in 
> pom.xml:
> {code:xml}
> ...
> 
> org.apache.maven.plugins
> maven-jar-plugin
> 
> 
> 
> test-jar
> 
> 
> 
> **/KafkaTestEnvironmentImpl*
> META-INF/LICENSE
> META-INF/NOTICE
> 
> 
> 
> 
> 
> ...
> {code}
> 

[jira] [Updated] (FLINK-22610) The test-jar and test-source-jar of flink-connector-kafka should include all classes

2021-06-24 Thread LIU Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LIU Xiao updated FLINK-22610:
-
Summary: The test-jar and test-source-jar of flink-connector-kafka should 
include all classes  (was: The test-jar and test-source-jar of 
flink-connector-kafka should include all test classes)

> The test-jar and test-source-jar of flink-connector-kafka should include all 
> classes
> 
>
> Key: FLINK-22610
> URL: https://issues.apache.org/jira/browse/FLINK-22610
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0, 1.12.3
>Reporter: LIU Xiao
>Priority: Minor
>
> The test-jar of old kafka connector (flink-connector-kafka-base and 
> flink-connector-kafka-0.11) includes convenient utility classes 
> (KafkaTestEnvironment and KafkaTestEnvironmentImpl, etc.) to start an 
> embedded kafka in unit test, and we used the utility classes to build some 
> test cases for our project.
> Now the utility classes other than KafkaTestEnvironmentImpl seem to be gone 
> in test-jar of new kafka connector (flink-connector-kafka), and I find that 
> is because they are not included in the configuration of maven-jar-plugin in 
> pom.xml:
> {code:java}
> 
> 
> 
> org.apache.maven.plugins
> maven-jar-plugin
> 
> 
> 
> test-jar
> 
> 
> 
> **/KafkaTestEnvironmentImpl*
> META-INF/LICENSE
> META-INF/NOTICE
> 
> 
> 
> 
> 
> {code}
> This configuration seems to be inherited from flink-connector-kafka-0.11, but 
> I think the configuration of flink-connector-kafka-base should be used:
> {code:java}
>   
> 
>   
> maven-jar-plugin
> 
>   
> 
>   test-jar
> 
>   
> 
>   
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22610) The test-jar and test-source-jar of flink-connector-kafka should include all test classes

2021-06-24 Thread LIU Xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LIU Xiao updated FLINK-22610:
-
Summary: The test-jar and test-source-jar of flink-connector-kafka should 
include all test classes  (was: The test-jar of flink-connector-kafka should 
include all test classes)

> The test-jar and test-source-jar of flink-connector-kafka should include all 
> test classes
> -
>
> Key: FLINK-22610
> URL: https://issues.apache.org/jira/browse/FLINK-22610
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0, 1.12.3
>Reporter: LIU Xiao
>Priority: Minor
>
> The test-jar of old kafka connector (flink-connector-kafka-base and 
> flink-connector-kafka-0.11) includes convenient utility classes 
> (KafkaTestEnvironment and KafkaTestEnvironmentImpl, etc.) to start an 
> embedded kafka in unit test, and we used the utility classes to build some 
> test cases for our project.
> Now the utility classes other than KafkaTestEnvironmentImpl seem to be gone 
> in test-jar of new kafka connector (flink-connector-kafka), and I find that 
> is because they are not included in the configuration of maven-jar-plugin in 
> pom.xml:
> {code:java}
> 
> 
> 
> org.apache.maven.plugins
> maven-jar-plugin
> 
> 
> 
> test-jar
> 
> 
> 
> **/KafkaTestEnvironmentImpl*
> META-INF/LICENSE
> META-INF/NOTICE
> 
> 
> 
> 
> 
> {code}
> This configuration seems to be inherited from flink-connector-kafka-0.11, but 
> I think the configuration of flink-connector-kafka-base should be used:
> {code:java}
>   
> 
>   
> maven-jar-plugin
> 
>   
> 
>   test-jar
> 
>   
> 
>   
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] Airblader commented on pull request #16230: [hotfix][docs] Updated description of HOP and CUMULATE parameters

2021-06-24 Thread GitBox


Airblader commented on pull request #16230:
URL: https://github.com/apache/flink/pull/16230#issuecomment-868196261


   I'm guessing the PR needs to be rebased so that CI can pass.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-23152) Flink - {"errors":["Service temporarily unavailable due to an ongoing leader election. Please refresh."]}

2021-06-24 Thread liu zhuang (Jira)
liu zhuang created FLINK-23152:
--

 Summary: Flink - {"errors":["Service temporarily unavailable due 
to an ongoing leader election. Please refresh."]}
 Key: FLINK-23152
 URL: https://issues.apache.org/jira/browse/FLINK-23152
 Project: Flink
  Issue Type: Bug
  Components: Deployment / YARN
Affects Versions: 1.12.1
 Environment: Apache Hadoop Yarn version is 3.11

Apache Zookeeper version is 3.5.5
Reporter: liu zhuang
 Attachments: jobmanager log.png

Hi Team,

I deployed a Flink cluster in yarn-application mode, with JobManager HA backed by 
Apache ZooKeeper. I use the following command to submit the Flink job:

 

flink run-application -t yarn-application \
-c com.model \
-Djobmanager.memory.process.size=1024m \
-Dtaskmanager.memory.process.size=1024m \
-Dyarn.provided.lib.dirs="hdfs:///user/flink/libs;hdfs:///user/flink/plugins" \
-Dyarn.application.name="test" \
hdfs:///user/flink/jars/model-1.0.jar

 

But the Flink Web UI throws an error saying

"{"errors":["Service temporarily unavailable due to an ongoing leader election. 
Please refresh."]}".

 

In ZooKeeper, the node for this Flink job contains "[jobgraphs, leader, leaderlatch]".

{color:#FF}Attached is the JobManager log of this Flink job. At the same 
time, when I submit the job in Flink per-job mode, it runs 
normally.{color}

Please help to fix the issue with UI and HA.

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #16215: [FLINK-23023][table-planner-blink] Support offset in window TVF.

2021-06-24 Thread GitBox


flinkbot edited a comment on pull request #16215:
URL: https://github.com/apache/flink/pull/16215#issuecomment-864821802


   
   ## CI report:
   
   * 6f41569066414b5e99282bce5511fd52ac4111b1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19278)
 
   * ffc2f22cf926fd97e18b6d9e527d4e6f5556b91f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19512)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Tartarus0zm edited a comment on pull request #16275: [FLINK-20518][rest] Add decoding characters for MessageQueryParameter

2021-06-24 Thread GitBox


Tartarus0zm edited a comment on pull request #16275:
URL: https://github.com/apache/flink/pull/16275#issuecomment-868191218


   > > Do you think that special characters should be encoded on the UI side? 
Instead of decoding it on the server again?
   > 
   > yes and yes. I did a manual check when reviewing #13514 and confirmed that 
our REST API handles escaped special characters fine. So this is purely a UI 
issue.
   
   @zentol 
   I added some logging on the UI and the REST server, and found the following:
   the UI sends 
`0.GroupWindowAggregate(window=[TumblingGroupWindow(%27w$__rowtime__6)]__properti.watermarkLatency`
   but what the REST server (`RouterHandler`) receives is 
`0.GroupWindowAggregate(window=%5BTumblingGroupWindow(%252527w$__rowtime__6)%5D__properti.watermarkLatency`,
 which has been encoded 3 times.
   `QueryStringDecoder` decodes only once, so that is why this happens.
   
   Is there anything else you would like me to look into?
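   
   For context, the single-decode mismatch is easy to reproduce with plain JDK 
calls (a sketch only; this is not the actual UI/Netty code path):
   
   ```java
   import java.net.URLDecoder;
   import java.net.URLEncoder;
   import java.nio.charset.StandardCharsets;
   
   public class EncodingMismatchDemo {
       public static void main(String[] args) {
           String raw = "window=[TumblingGroupWindow('w$)]";
   
           // Encode repeatedly, as apparently happens on the way to the REST server.
           String encoded = raw;
           for (int i = 0; i < 3; i++) {
               encoded = URLEncoder.encode(encoded, StandardCharsets.UTF_8);
           }
   
           // QueryStringDecoder-like behaviour: decode only once.
           String decodedOnce = URLDecoder.decode(encoded, StandardCharsets.UTF_8);
   
           // Still contains %25-style escapes instead of the raw metric name.
           System.out.println(decodedOnce);
       }
   }
   ```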


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Tartarus0zm commented on pull request #16275: [FLINK-20518][rest] Add decoding characters for MessageQueryParameter

2021-06-24 Thread GitBox


Tartarus0zm commented on pull request #16275:
URL: https://github.com/apache/flink/pull/16275#issuecomment-868191218


   > > Do you think that special characters should be encoded on the UI side? 
Instead of decoding it on the server again?
   > 
   > yes and yes. I did a manual check when reviewing #13514 and confirmed that 
our REST API handles escaped special characters fine. So this is purely a UI 
issue.
   
   @zentol 
   I added some logging on the UI and the REST server, and found the following:
   the UI sends 
`0.GroupWindowAggregate(window=[TumblingGroupWindow(%27w$__rowtime__6)]__properti.watermarkLatency`
   but what the REST server (RouterHandler) receives is 
`0.GroupWindowAggregate(window=%5BTumblingGroupWindow(%252527w$__rowtime__6)%5D__properti.watermarkLatency`,
 which has been encoded 3 times.
   `QueryStringDecoder` decodes only once, so that is why this happens.
   
   Is there anything else you would like me to look into?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on a change in pull request #16245: [FLINK-22940][SQL-CLIENT] Make sql client column max width configurable

2021-06-24 Thread GitBox


wuchong commented on a change in pull request #16245:
URL: https://github.com/apache/flink/pull/16245#discussion_r658452346



##
File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/config/SqlClientOptions.java
##
@@ -53,4 +55,16 @@ private SqlClientOptions() {}
 .defaultValue(false)
 .withDescription(
 "Determine whether to output the verbose output to 
the console. If set the option true, it will print the exception stack. 
Otherwise, it only output the cause.");
+
+// Display options
+
+@Documentation.TableOption(execMode = 
Documentation.ExecMode.BATCH_STREAMING)
+public static final ConfigOption DISPLAY_MAX_COLUMN_WIDTH =
+ConfigOptions.key("sql-client.display.max_column_width")

Review comment:
   It would be better to use `sql-client.display.max-column-width`; Flink 
configuration doesn't use `_` as a separator. 

##
File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/ResultDescriptor.java
##
@@ -29,21 +35,17 @@
 
 private final boolean isMaterialized;
 
-private final boolean isTableauMode;
-
-private final boolean isStreamingMode;
+public final ReadableConfig config;

Review comment:
   We can add a `public int getMaxColumnWidth()` method just like 
`isStreamingMode`, so that we don't need to expose the whole `config`, and 
users don't need to remember the config option when accessing it, e.g. 
`resultDescriptor.config.get(DISPLAY_MAX_COLUMN_WIDTH)`.
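   
   A minimal sketch of what I mean (assuming the option holds an integer, and 
mirroring the existing `isStreamingMode()` style):
   
   ```java
   public int getMaxColumnWidth() {
       return config.get(SqlClientOptions.DISPLAY_MAX_COLUMN_WIDTH);
   }
   ```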

##
File path: docs/content/docs/dev/table/sqlClient.md
##
@@ -60,106 +59,72 @@ or explicitly use `embedded` mode:
 ./bin/sql-client.sh embedded
 ```
 
+See [SQL Client startup options](#sql-client-startup-options) below for more 
details.
+
 ### Running SQL Queries
 
-Once the CLI has been started, you can use the `HELP` command to list all 
available SQL statements.
-For validating your setup and cluster connection, you can enter your first SQL 
query and press the `Enter` key to execute it:
+For validating your setup and cluster connection, you can enter the simple 
query below and press `Enter` to execute it.
 
 ```sql
-SELECT 'Hello World';
+SELECT 
+  name, 
+  COUNT(*) AS cnt 
+FROM 
+  (VALUES ('Bob'), ('Alice'), ('Greg'), ('Bob')) AS NameTable(name) 
+GROUP BY name;
 ```
 
-This query requires no table source and produces a single row result. The CLI 
will retrieve results
-from the cluster and visualize them. You can close the result view by pressing 
the `Q` key.
-
-The CLI supports **three modes** for maintaining and visualizing results.
-
-The **table mode** materializes results in memory and visualizes them in a 
regular, paginated table representation.
-It can be enabled by executing the following command in the CLI:
+The SQL client will retrieve the results from the cluster and visualize them 
(you can close the result view by pressing the `Q` key):
 
 ```text
-SET 'sql-client.execution.result-mode' = 'table';
+   name  cnt
+  Alice1
+   Greg1
+Bob2
 ```
 
-The **changelog mode** does not materialize results and visualizes the result 
stream that is produced
-by a [continuous query]({{< ref "docs/dev/table/concepts/dynamic_tables" 
>}}#continuous-queries) consisting of insertions (`+`) and retractions (`-`).
+The `SET` command allows you to tune the job execution and the sql client 
behaviour. See [SQL Client Configuration](#sql-client-configuration) below for 
more details.
 
-```text
-SET 'sql-client.execution.result-mode' = 'changelog';
-```
+After a query is defined, it can be submitted to the cluster as a 
long-running, detached Flink job. 
+The [configuration section](#configuration) explains how to declare table 
sources for reading data, how to declare table sinks for writing data, and how 
to configure other table program properties.
 
-The **tableau mode** is more like a traditional way which will display the 
results in the screen directly with a tableau format.
-The displaying content will be influenced by the query execution 
type(`execution.type`).
 
-```text
-SET 'sql-client.execution.result-mode' = 'tableau';
-```
-
-Note that when you use this mode with streaming query, the result will be 
continuously printed on the console. If the input data of
-this query is bounded, the job will terminate after Flink processed all input 
data, and the printing will also be stopped automatically.
-Otherwise, if you want to terminate a running query, just type `CTRL-C` in 
this case, the job and the printing will be stopped.
+### Getting help
 
-You can use the following query to see all the result modes in action:
-
-```sql
-SELECT name, COUNT(*) AS cnt FROM (VALUES ('Bob'), ('Alice'), ('Greg'), 
('Bob')) AS NameTable(name) GROUP BY name;

[GitHub] [flink] beyond1920 commented on pull request #16219: [FLINK-22781][table-planner-blink] Fix bug in emit behavior of GroupWindowAggregate to skip emit window result if input stream of GroupWin

2021-06-24 Thread GitBox


beyond1920 commented on pull request #16219:
URL: https://github.com/apache/flink/pull/16219#issuecomment-868185126


   @cshuo Would you please have a look at the pull request, thanks a lot. cc 
@godfreyhe 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] lirui-apache closed pull request #16182: [FLINK-23010][hive] HivePartitionFetcherContextBase shouldn't list fo…

2021-06-24 Thread GitBox


lirui-apache closed pull request #16182:
URL: https://github.com/apache/flink/pull/16182


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16283: [FLINK-23149][table-code-splitter] Introduce the new Java code splitter module

2021-06-24 Thread GitBox


flinkbot edited a comment on pull request #16283:
URL: https://github.com/apache/flink/pull/16283#issuecomment-868177111


   
   ## CI report:
   
   * e4990b25ca06e124e3d84da5b2dd77580340975c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19509)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16284: [FLINK-23142][table] UpdatableTopNFunction output wrong order in the same unique key

2021-06-24 Thread GitBox


flinkbot edited a comment on pull request #16284:
URL: https://github.com/apache/flink/pull/16284#issuecomment-868177152


   
   ## CI report:
   
   * d85f933fd10b9e6ecce5b8a87f82cac5fce6a368 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19510)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16215: [FLINK-23023][table-planner-blink] Support offset in window TVF.

2021-06-24 Thread GitBox


flinkbot edited a comment on pull request #16215:
URL: https://github.com/apache/flink/pull/16215#issuecomment-864821802


   
   ## CI report:
   
   * 6f41569066414b5e99282bce5511fd52ac4111b1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19278)
 
   * ffc2f22cf926fd97e18b6d9e527d4e6f5556b91f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-19754) Cannot have more than one execute() or executeAsync() call in a single environment.

2021-06-24 Thread roberto hashioka (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17369225#comment-17369225
 ] 

roberto hashioka commented on FLINK-19754:
--

I'm also experiencing this issue. Is there a way to work around this limitation 
while still using the API to submit the job?

> Cannot have more than one execute() or executeAsync() call in a single 
> environment.
> ---
>
> Key: FLINK-19754
> URL: https://issues.apache.org/jira/browse/FLINK-19754
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.2
>Reporter: little-tomato
>Priority: Major
>
> I run this code on my Standalone Cluster. When i submit the job,the error log 
> is as follows:
> {code}
> 2020-10-20 11:53:42,969 WARN 
> org.apache.flink.client.deployment.application.DetachedApplicationRunner [] - 
> Could not execute application: 
>  org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Cannot have more than one execute() or executeAsync() call 
> in a single environment.
>  at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149) 
> ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:78)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.run(DetachedApplicationRunner.java:67)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$0(JarRunHandler.java:100)
>  ~[flink-dist_2.12-1.11.2.jar:1.11.2]
>  at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
>  [?:1.8.0_221]
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [?:1.8.0_221]
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_221]
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  [?:1.8.0_221]
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  [?:1.8.0_221]
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_221]
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_221]
>  at java.lang.Thread.run(Thread.java:748) [?:1.8.0_221]
>  Caused by: org.apache.flink.util.FlinkRuntimeException: Cannot have more 
> than one execute() or executeAsync() call in a single environment.
>  at 
> org.apache.flink.client.program.StreamContextEnvironment.validateAllowedExecution(StreamContextEnvironment.java:139)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.client.program.StreamContextEnvironment.executeAsync(StreamContextEnvironment.java:127)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.client.program.StreamContextEnvironment.execute(StreamContextEnvironment.java:76)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1697)
>  ~[flink-dist_2.12-1.11.2.jar:1.11.2]
>  at cn.cuiot.dmp.ruleengine.job.RuleEngineJob.main(RuleEngineJob.java:556) 
> ~[?:?]
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_221]
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_221]
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_221]
>  at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_221]
>  at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:288)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
> {code}
> my code is:
> {code:java}
> final StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
>  EnvironmentSettings bsSettings = 
> EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
>  StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, bsSettings);
>  ...
>  FlinkKafkaConsumer myConsumer = new 
> FlinkKafkaConsumer("kafkatopic", new SimpleStringSchema(), 
> properties);
>  myConsumer.setStartFromLatest();
> DataStream kafkaDataStream = env.addSource(myConsumer);
> Sin

[GitHub] [flink] beyond1920 edited a comment on pull request #16215: [FLINK-23023][table-planner-blink] Support offset in window TVF.

2021-06-24 Thread GitBox


beyond1920 edited a comment on pull request #16215:
URL: https://github.com/apache/flink/pull/16215#issuecomment-868182128


   @wenlong88 Window offset has no effect on the watermark. Whether it affects the 
time attribute depends on each window operator. For window aggregate, the time 
attribute is (window_end - 1); window_end would be shifted if a window offset is set.
   
   For example, suppose a user needs to calculate PV every 1 hour, but wants the 
window to begin at minute 05 instead of minute 00 of each hour.
   The SQL is as follows. The window offset is 5 minutes; it has no effect on the 
watermark, and the rowtime attribute would be at yyyy-MM-dd hh:04:59.999Z instead of 
yyyy-MM-dd (hh-1):59:59.999Z. 
   
   `
   SELECT
   `product`,
   window_start,
   window_end,
   COUNT(`user`) AS pv
   FROM TABLE(
 TUMBLE(TABLE temp_table, DESCRIPTOR(rowtime), INTERVAL '1' HOUR, 
INTERVAL '5' MINUTE))
   GROUP BY product, window_start, window_end
   ` 
   
   I added a cascaded window aggregate with a window offset in 
`WindowAggregateITCase#testCascadeEventTimeTumbleWindowWithOffset`, please have 
a look. 
   Thanks a lot.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] beyond1920 commented on pull request #16215: [FLINK-23023][table-planner-blink] Support offset in window TVF.

2021-06-24 Thread GitBox


beyond1920 commented on pull request #16215:
URL: https://github.com/apache/flink/pull/16215#issuecomment-868182128


   @wenlong88 Window offset has no effect on the watermark. Whether it affects the 
time attribute depends on each window operator. For window aggregate, the time 
attribute is (window_end - 1); window_end would be shifted if a window offset is set.
   
   For example, suppose a user needs to calculate PV every 1 hour, but wants the 
window to begin at minute 05 instead of minute 00 of each hour.
   The SQL is as follows. The window offset is 5 minutes; it has no effect on the 
watermark, and the rowtime attribute would be at yyyy-MM-dd hh:04:59.999Z instead of 
yyyy-MM-dd (hh-1):59:59.999Z. 
   `
   SELECT
   `product`,
   window_start,
   window_end,
   COUNT(`user`) AS pv
   FROM TABLE(
 TUMBLE(TABLE temp_table, DESCRIPTOR(rowtime), INTERVAL '1' HOUR, 
INTERVAL '5' MINUTE))
   GROUP BY product, window_start, window_end
   ` 
   I added a cascaded window aggregate with a window offset in 
`WindowAggregateITCase#testCascadeEventTimeTumbleWindowWithOffset`, please have 
a look. 
   Thanks a lot.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-23009) Bump up Guava in Kinesis Connector

2021-06-24 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17369223#comment-17369223
 ] 

Xintong Song commented on FLINK-23009:
--

[~iemre] [~dannycranmer]
The test failure in PR#16195 is not unrelated: the error reported is different 
from FLINK-22492, and the test has been failing constantly in the 1.13 branch 
since the PR was merged.

> Bump up Guava in Kinesis Connector
> --
>
> Key: FLINK-23009
> URL: https://issues.apache.org/jira/browse/FLINK-23009
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kinesis
>Reporter: Emre Kartoglu
>Assignee: Emre Kartoglu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0, 1.12.5, 1.13.2
>
>
> *Background*
> We maintain a copy of the Flink connector in our AWS GitHub group: 
> [https://github.com/awslabs/amazon-kinesis-connector-flink]
> We've recently upgraded the Guava library in our AWS copy as the version we 
> were using was quite old and had incompatible interface differences with the 
> later and more commonly used Guava versions. As part of this ticket we'll be 
> applying the same changes in the Flink repo 
> [https://github.com/apache/flink/tree/master/flink-connectors/flink-connector-kinesis]
>  
> *Scope*
>  * Upgrade Guava library in pom.xml
>  * Switch to 3-arg version of Guava Futures.addCallback method call, as the 
> old 2-arg version is no longer supported
> *Result*
> All existing and new tests should pass
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] KarmaGYZ closed pull request #16179: [TESTING] Try to reproduce FLINK-22891

2021-06-24 Thread GitBox


KarmaGYZ closed pull request #16179:
URL: https://github.com/apache/flink/pull/16179


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-23009) Bump up Guava in Kinesis Connector

2021-06-24 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17369219#comment-17369219
 ] 

Xintong Song commented on FLINK-23009:
--

reverted release 1.13: 7d71b0b0d771e456460be9984f3d95b4d1500aa2

> Bump up Guava in Kinesis Connector
> --
>
> Key: FLINK-23009
> URL: https://issues.apache.org/jira/browse/FLINK-23009
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kinesis
>Reporter: Emre Kartoglu
>Assignee: Emre Kartoglu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0, 1.12.5, 1.13.2
>
>
> *Background*
> We maintain a copy of the Flink connector in our AWS GitHub group: 
> [https://github.com/awslabs/amazon-kinesis-connector-flink]
> We've recently upgraded the Guava library in our AWS copy as the version we 
> were using was quite old and had incompatible interface differences with the 
> later and more commonly used Guava versions. As part of this ticket we'll be 
> applying the same changes in the Flink repo 
> [https://github.com/apache/flink/tree/master/flink-connectors/flink-connector-kinesis]
>  
> *Scope*
>  * Upgrade Guava library in pom.xml
>  * Switch to 3-arg version of Guava Futures.addCallback method call, as the 
> old 2-arg version is no longer supported
> *Result*
> All existing and new tests should pass
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-23151) KinesisTableApiITCase.testTableApiSourceAndSink fails on azure

2021-06-24 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song closed FLINK-23151.

Resolution: Fixed

> KinesisTableApiITCase.testTableApiSourceAndSink fails on azure
> --
>
> Key: FLINK-23151
> URL: https://issues.apache.org/jira/browse/FLINK-23151
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.13.1
>Reporter: Xintong Song
>Priority: Major
> Fix For: 1.13.2
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19502&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529&l=28179
> {code}
> Jun 25 00:59:29 [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 22.764 s <<< FAILURE! - in 
> org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase
> Jun 25 00:59:29 [ERROR] 
> testTableApiSourceAndSink(org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase)
>   Time elapsed: 22.027 s  <<< FAILURE!
> Jun 25 00:59:29 java.lang.AssertionError: 
> expected:<[org.apache.flink.streaming.kinesis.test.model.Order@bed, 
> org.apache.flink.streaming.kinesis.test.model.Order@c11, 
> org.apache.flink.streaming.kinesis.test.model.Order@c35]> but was:<[]>
> Jun 25 00:59:29   at org.junit.Assert.fail(Assert.java:88)
> Jun 25 00:59:29   at org.junit.Assert.failNotEquals(Assert.java:834)
> Jun 25 00:59:29   at org.junit.Assert.assertEquals(Assert.java:118)
> Jun 25 00:59:29   at org.junit.Assert.assertEquals(Assert.java:144)
> Jun 25 00:59:29   at 
> org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase.testTableApiSourceAndSink(KinesisTableApiITCase.java:111)
> Jun 25 00:59:29   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jun 25 00:59:29   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jun 25 00:59:29   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jun 25 00:59:29   at java.lang.reflect.Method.invoke(Method.java:498)
> Jun 25 00:59:29   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Jun 25 00:59:29   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jun 25 00:59:29   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Jun 25 00:59:29   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jun 25 00:59:29   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Jun 25 00:59:29   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jun 25 00:59:29   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> Jun 25 00:59:29   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Jun 25 00:59:29   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Jun 25 00:59:29   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Jun 25 00:59:29   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Jun 25 00:59:29   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Jun 25 00:59:29   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Jun 25 00:59:29   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Jun 25 00:59:29   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Jun 25 00:59:29   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Jun 25 00:59:29   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
> Jun 25 00:59:29   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
> Jun 25 00:59:29   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> Jun 25 00:59:29   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> Jun 25 00:59:29   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Jun 25 00:59:29   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23151) KinesisTableApiITCase.testTableApiSourceAndSink fails on azure

2021-06-24 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17369218#comment-17369218
 ] 

Xintong Song commented on FLINK-23151:
--

The test case has been failing constantly on the 1.13 branch since FLINK-23009 
was merged.
Reverted in 7d71b0b0d771e456460be9984f3d95b4d1500aa2.

> KinesisTableApiITCase.testTableApiSourceAndSink fails on azure
> --
>
> Key: FLINK-23151
> URL: https://issues.apache.org/jira/browse/FLINK-23151
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.13.1
>Reporter: Xintong Song
>Priority: Major
> Fix For: 1.13.2
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19502&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529&l=28179
> {code}
> Jun 25 00:59:29 [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 22.764 s <<< FAILURE! - in 
> org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase
> Jun 25 00:59:29 [ERROR] 
> testTableApiSourceAndSink(org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase)
>   Time elapsed: 22.027 s  <<< FAILURE!
> Jun 25 00:59:29 java.lang.AssertionError: 
> expected:<[org.apache.flink.streaming.kinesis.test.model.Order@bed, 
> org.apache.flink.streaming.kinesis.test.model.Order@c11, 
> org.apache.flink.streaming.kinesis.test.model.Order@c35]> but was:<[]>
> Jun 25 00:59:29   at org.junit.Assert.fail(Assert.java:88)
> Jun 25 00:59:29   at org.junit.Assert.failNotEquals(Assert.java:834)
> Jun 25 00:59:29   at org.junit.Assert.assertEquals(Assert.java:118)
> Jun 25 00:59:29   at org.junit.Assert.assertEquals(Assert.java:144)
> Jun 25 00:59:29   at 
> org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase.testTableApiSourceAndSink(KinesisTableApiITCase.java:111)
> Jun 25 00:59:29   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jun 25 00:59:29   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jun 25 00:59:29   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jun 25 00:59:29   at java.lang.reflect.Method.invoke(Method.java:498)
> Jun 25 00:59:29   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Jun 25 00:59:29   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jun 25 00:59:29   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Jun 25 00:59:29   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jun 25 00:59:29   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Jun 25 00:59:29   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jun 25 00:59:29   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> Jun 25 00:59:29   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Jun 25 00:59:29   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Jun 25 00:59:29   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Jun 25 00:59:29   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Jun 25 00:59:29   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Jun 25 00:59:29   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Jun 25 00:59:29   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Jun 25 00:59:29   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Jun 25 00:59:29   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Jun 25 00:59:29   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
> Jun 25 00:59:29   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
> Jun 25 00:59:29   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> Jun 25 00:59:29   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> Jun 25 00:59:29   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Jun 25 00:59:29   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #16284: [FLINK-23142][table] UpdatableTopNFunction output wrong order in the same unique key

2021-06-24 Thread GitBox


flinkbot commented on pull request #16284:
URL: https://github.com/apache/flink/pull/16284#issuecomment-868177152


   
   ## CI report:
   
   * d85f933fd10b9e6ecce5b8a87f82cac5fce6a368 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #16283: [FLINK-23149][table-code-splitter] Introduce the new Java code splitter module

2021-06-24 Thread GitBox


flinkbot commented on pull request #16283:
URL: https://github.com/apache/flink/pull/16283#issuecomment-868177111


   
   ## CI report:
   
   * e4990b25ca06e124e3d84da5b2dd77580340975c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-23151) KinesisTableApiITCase.testTableApiSourceAndSink fails on azure

2021-06-24 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17369213#comment-17369213
 ] 

Xintong Song commented on FLINK-23151:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19502&view=logs&j=739e6eac-8312-5d31-d437-294c4d26fced&t=a68b8d89-50e9-5977-4500-f4fde4f57f9b&l=27563

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19502&view=logs&j=68a897ab-3047-5660-245a-cce8f83859f6&t=16ca2cca-2f63-5cce-12d2-d519b930a729&l=27748

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19502&view=logs&j=91bf6583-3fb2-592f-e4d4-d79d79c3230a&t=3425d8ba-5f03-540a-c64b-51b8481bf7d6&l=27425

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19502&view=logs&j=08866332-78f7-59e4-4f7e-49a56faa3179&t=7f606211-1454-543c-70ab-c7a028a1ce8c&l=28031

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19502&view=logs&j=6caf31d6-847a-526e-9624-468e053467d6&t=0b23652f-b18b-5b6e-6eb6-a11070364610&l=18007

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19502&view=logs&j=4dd4dbdd-1802-5eb7-a518-6acd9d24d0fc&t=8d6b4dd3-4ca1-5611-1743-57a7d76b395a&l=16712

> KinesisTableApiITCase.testTableApiSourceAndSink fails on azure
> --
>
> Key: FLINK-23151
> URL: https://issues.apache.org/jira/browse/FLINK-23151
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.13.1
>Reporter: Xintong Song
>Priority: Major
> Fix For: 1.13.2
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19502&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529&l=28179
> {code}
> Jun 25 00:59:29 [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 22.764 s <<< FAILURE! - in 
> org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase
> Jun 25 00:59:29 [ERROR] 
> testTableApiSourceAndSink(org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase)
>   Time elapsed: 22.027 s  <<< FAILURE!
> Jun 25 00:59:29 java.lang.AssertionError: 
> expected:<[org.apache.flink.streaming.kinesis.test.model.Order@bed, 
> org.apache.flink.streaming.kinesis.test.model.Order@c11, 
> org.apache.flink.streaming.kinesis.test.model.Order@c35]> but was:<[]>
> Jun 25 00:59:29   at org.junit.Assert.fail(Assert.java:88)
> Jun 25 00:59:29   at org.junit.Assert.failNotEquals(Assert.java:834)
> Jun 25 00:59:29   at org.junit.Assert.assertEquals(Assert.java:118)
> Jun 25 00:59:29   at org.junit.Assert.assertEquals(Assert.java:144)
> Jun 25 00:59:29   at 
> org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase.testTableApiSourceAndSink(KinesisTableApiITCase.java:111)
> Jun 25 00:59:29   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jun 25 00:59:29   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jun 25 00:59:29   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jun 25 00:59:29   at java.lang.reflect.Method.invoke(Method.java:498)
> Jun 25 00:59:29   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Jun 25 00:59:29   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jun 25 00:59:29   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Jun 25 00:59:29   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jun 25 00:59:29   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Jun 25 00:59:29   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jun 25 00:59:29   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> Jun 25 00:59:29   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Jun 25 00:59:29   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Jun 25 00:59:29   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Jun 25 00:59:29   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Jun 25 00:59:29   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Jun 25 00:59:29   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Jun 25 00:59:29   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Jun 25 00:59:29   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Jun 25 00:59:29   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> J

[jira] [Created] (FLINK-23151) KinesisTableApiITCase.testTableApiSourceAndSink fails on azure

2021-06-24 Thread Xintong Song (Jira)
Xintong Song created FLINK-23151:


 Summary: KinesisTableApiITCase.testTableApiSourceAndSink fails on 
azure
 Key: FLINK-23151
 URL: https://issues.apache.org/jira/browse/FLINK-23151
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kinesis
Affects Versions: 1.13.1
Reporter: Xintong Song
 Fix For: 1.13.2


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19502&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529&l=28179

{code}
Jun 25 00:59:29 [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time 
elapsed: 22.764 s <<< FAILURE! - in 
org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase
Jun 25 00:59:29 [ERROR] 
testTableApiSourceAndSink(org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase)
  Time elapsed: 22.027 s  <<< FAILURE!
Jun 25 00:59:29 java.lang.AssertionError: 
expected:<[org.apache.flink.streaming.kinesis.test.model.Order@bed, 
org.apache.flink.streaming.kinesis.test.model.Order@c11, 
org.apache.flink.streaming.kinesis.test.model.Order@c35]> but was:<[]>
Jun 25 00:59:29 at org.junit.Assert.fail(Assert.java:88)
Jun 25 00:59:29 at org.junit.Assert.failNotEquals(Assert.java:834)
Jun 25 00:59:29 at org.junit.Assert.assertEquals(Assert.java:118)
Jun 25 00:59:29 at org.junit.Assert.assertEquals(Assert.java:144)
Jun 25 00:59:29 at 
org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase.testTableApiSourceAndSink(KinesisTableApiITCase.java:111)
Jun 25 00:59:29 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
Jun 25 00:59:29 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Jun 25 00:59:29 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Jun 25 00:59:29 at java.lang.reflect.Method.invoke(Method.java:498)
Jun 25 00:59:29 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
Jun 25 00:59:29 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Jun 25 00:59:29 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
Jun 25 00:59:29 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
Jun 25 00:59:29 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
Jun 25 00:59:29 at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
Jun 25 00:59:29 at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
Jun 25 00:59:29 at org.junit.rules.RunRules.evaluate(RunRules.java:20)
Jun 25 00:59:29 at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
Jun 25 00:59:29 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
Jun 25 00:59:29 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
Jun 25 00:59:29 at 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
Jun 25 00:59:29 at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
Jun 25 00:59:29 at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
Jun 25 00:59:29 at 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
Jun 25 00:59:29 at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
Jun 25 00:59:29 at 
org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
Jun 25 00:59:29 at 
org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
Jun 25 00:59:29 at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
Jun 25 00:59:29 at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
Jun 25 00:59:29 at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)
Jun 25 00:59:29 at java.lang.Thread.run(Thread.java:748)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-18464) ClassCastException during namespace serialization for checkpoint (Heap and RocksDB)

2021-06-24 Thread guxiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17368981#comment-17368981
 ] 

guxiang edited comment on FLINK-18464 at 6/25/21, 3:21 AM:
---

Hi [~sjwiesman] 

I am a little confused about your explanation.

 First of all, I want to make sure you are looking at the new PR I am referring to, 
[https://github.com/apache/flink/pull/16273/files], not the PR that shares state. 
This PR simply disallows changing the namespace when the state descriptor and 
namespace serializer differ from the currently registered ones, at the code level.

I don't think this PR involves anti-patterns; rather, it enforces the prohibition 
of this anti-pattern.

Please let me know if there is something wrong with my understanding.

In addition, I think [~yunta]'s performance concerns are due to the use of the 
LastState cache, but since I also use the LastNamespaceServer cache for that 
check, I don't think there will be a performance drop.

WDYT?  (cc: [~roman_khachatryan] , [~yunta])
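
For anyone skimming this thread, a minimal, hypothetical sketch of the kind of 
namespace clash this issue and the linked PR are about (this is not code from the 
PR, and the class and state names are made up): the same state name is accessed 
once under the TimeWindow namespace from a custom Trigger and once under the 
VoidNamespace via globalState() in the window function, which is the situation the 
proposed check would reject at registration time.

{code:java}
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

public class SharedDescriptorSketch {

    // The same descriptor (same state name) is used from two places below.
    static final ValueStateDescriptor<Long> COUNT =
            new ValueStateDescriptor<>("shared-count", Long.class);

    // Access 1: inside a custom Trigger the state is registered under the
    // TimeWindow namespace via TriggerContext#getPartitionedState.
    static class MyTrigger extends Trigger<String, TimeWindow> {
        @Override
        public TriggerResult onElement(String element, long timestamp, TimeWindow window,
                TriggerContext ctx) throws Exception {
            ValueState<Long> count = ctx.getPartitionedState(COUNT);
            Long current = count.value();
            count.update(current == null ? 1L : current + 1);
            return TriggerResult.CONTINUE;
        }

        @Override
        public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) {
            return TriggerResult.CONTINUE;
        }

        @Override
        public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) {
            return time == window.maxTimestamp() ? TriggerResult.FIRE : TriggerResult.CONTINUE;
        }

        @Override
        public void clear(TimeWindow window, TriggerContext ctx) {
            ctx.getPartitionedState(COUNT).clear();
        }
    }

    // Access 2: the window function reuses the same descriptor through globalState(),
    // which lives in the VoidNamespace, so one registered state ends up being used
    // with two different namespace types.
    static class MyWindowFunction
            extends ProcessWindowFunction<String, String, String, TimeWindow> {
        @Override
        public void process(String key, Context context, Iterable<String> elements,
                Collector<String> out) throws Exception {
            ValueState<Long> total = context.globalState().getState(COUNT);
            Long current = total.value();
            total.update(current == null ? 1L : current + 1);
            out.collect(key + ": " + total.value());
        }
    }
}
{code}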


was (Author: guxiangfly):
Hi [~sjwiesman] 

I am a little confused about your explanation.

 First of all, I want to make sure if you are looking at the new PR I refer to, 
[https://github.com/apache/flink/pull/16273/files] , not the PR that sharing 
state.This PR simply disallow change the  namespace  when  state descriptor and 
namespace serializer are different to current registered one , at the code 
level.   

I don't think this PR  involves anti-patterns. This PR will enforce the 
prohibition of this anti-pattern. 

Please see if there is something wrong with my understanding

 

> ClassCastException during namespace serialization for checkpoint (Heap and 
> RocksDB)
> ---
>
> Key: FLINK-18464
> URL: https://issues.apache.org/jira/browse/FLINK-18464
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Affects Versions: 1.9.3, 1.13.1
>Reporter: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2021-06-21-20-06-51-323.png, 
> image-2021-06-21-20-07-30-281.png, image-2021-06-21-20-07-43-246.png, 
> image-2021-06-21-20-33-39-295.png, image-2021-06-23-14-34-37-703.png, 
> image-2021-06-24-16-41-54-425.png, image-2021-06-24-17-51-53-734.png
>
>
> (see FLINK-23036 for error details with RocksDB)
>  
> From 
> [thread|http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Checkpoint-failed-because-of-TimeWindow-cannot-be-cast-to-VoidNamespace-td36310.html]
> {quote}I'm using flink 1.9 on Mesos and I try to use my own trigger and 
> evictor. The state is stored to memory.
> {quote}
>  
>   
> {code:java}
> input.setParallelism(processParallelism)
>         .assignTimestampsAndWatermarks(new UETimeAssigner)
>         .keyBy(_.key)
>         .window(TumblingEventTimeWindows.of(Time.minutes(20)))
>         .trigger(new MyTrigger)
>         .evictor(new MyEvictor)
>         .process(new MyFunction).setParallelism(aggregateParallelism)
>         .addSink(kafkaSink).setParallelism(sinkParallelism)
>         .name("kafka-record-sink"){code}
>  
>  
> {code:java}
> java.lang.Exception: Could not materialize checkpoint 1 for operator 
> Window(TumblingEventTimeWindows(120), JoinTrigger, JoinEvictor, 
> ScalaProcessWindowFunctionWrapper) -> Sink: kafka-record-sink (2/5).
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.handleExecutionException(StreamTask.java:1100)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:1042)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.ExecutionException: 
> java.lang.ClassCastException: 
> org.apache.flink.streaming.api.windowing.windows.TimeWindow cannot be cast to 
> org.apache.flink.runtime.state.VoidNamespace
>  at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>  at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>  at 
> org.apache.flink.runtime.concurrent.FutureUtils.runIfNotDoneAndGet(FutureUtils.java:450)
>  at 
> org.apache.flink.streaming.api.operators.OperatorSnapshotFinalizer.<init>(OperatorSnapshotFinalizer.java:47)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:1011)
>  
>  ... 3 more 
> Caused by: java.lang.ClassCastException: 
> org.apache.flink.streaming.api.windowing.windows.TimeWindow cannot be cast to 
> org.apache.flink.runtime.state.VoidNamespace
>  at 
> org.apac

[jira] [Updated] (FLINK-22110) KinesisTableApiITCase.testTableApiSourceAndSink fails/hangs on Azure when building FlinkContainer

2021-06-24 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-22110:
-
Labels: auto-deprioritized-major test-stability  (was: test-stability)

> KinesisTableApiITCase.testTableApiSourceAndSink fails/hangs on Azure when 
> building FlinkContainer
> -
>
> Key: FLINK-22110
> URL: https://issues.apache.org/jira/browse/FLINK-22110
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16056&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529&l=27232
> {code:java}
> Apr 06 00:08:58 [ERROR] 
> testTableApiSourceAndSink(org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase)
>   Time elapsed: 0.007 s  <<< ERROR!
> Apr 06 00:08:58 java.lang.RuntimeException: Could not build the flink-dist 
> image
> Apr 06 00:08:58   at 
> org.apache.flink.tests.util.flink.FlinkContainer$FlinkContainerBuilder.build(FlinkContainer.java:280)
> Apr 06 00:08:58   at 
> org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase.<init>(KinesisTableApiITCase.java:77)
> Apr 06 00:08:58   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Apr 06 00:08:58   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Apr 06 00:08:58   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Apr 06 00:08:58   at 
> java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:217)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:266)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:263)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22110) KinesisTableApiITCase.testTableApiSourceAndSink fails/hangs on Azure when building FlinkContainer

2021-06-24 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-22110:
-
Priority: Minor  (was: Major)

> KinesisTableApiITCase.testTableApiSourceAndSink fails/hangs on Azure when 
> building FlinkContainer
> -
>
> Key: FLINK-22110
> URL: https://issues.apache.org/jira/browse/FLINK-22110
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Minor
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16056&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529&l=27232
> {code:java}
> Apr 06 00:08:58 [ERROR] 
> testTableApiSourceAndSink(org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase)
>   Time elapsed: 0.007 s  <<< ERROR!
> Apr 06 00:08:58 java.lang.RuntimeException: Could not build the flink-dist 
> image
> Apr 06 00:08:58   at 
> org.apache.flink.tests.util.flink.FlinkContainer$FlinkContainerBuilder.build(FlinkContainer.java:280)
> Apr 06 00:08:58   at 
> org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase.<init>(KinesisTableApiITCase.java:77)
> Apr 06 00:08:58   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Apr 06 00:08:58   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Apr 06 00:08:58   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Apr 06 00:08:58   at 
> java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:217)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:266)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:263)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-22110) KinesisTableApiITCase.testTableApiSourceAndSink fails/hangs on Azure when building FlinkContainer

2021-06-24 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-22110:
-
Comment: was deleted

(was: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19502&view=logs&j=68a897ab-3047-5660-245a-cce8f83859f6&t=16ca2cca-2f63-5cce-12d2-d519b930a729&l=27748)

> KinesisTableApiITCase.testTableApiSourceAndSink fails/hangs on Azure when 
> building FlinkContainer
> -
>
> Key: FLINK-22110
> URL: https://issues.apache.org/jira/browse/FLINK-22110
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16056&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529&l=27232
> {code:java}
> Apr 06 00:08:58 [ERROR] 
> testTableApiSourceAndSink(org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase)
>   Time elapsed: 0.007 s  <<< ERROR!
> Apr 06 00:08:58 java.lang.RuntimeException: Could not build the flink-dist 
> image
> Apr 06 00:08:58   at 
> org.apache.flink.tests.util.flink.FlinkContainer$FlinkContainerBuilder.build(FlinkContainer.java:280)
> Apr 06 00:08:58   at 
> org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase.<init>(KinesisTableApiITCase.java:77)
> Apr 06 00:08:58   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Apr 06 00:08:58   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Apr 06 00:08:58   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Apr 06 00:08:58   at 
> java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:217)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:266)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:263)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-22110) KinesisTableApiITCase.testTableApiSourceAndSink fails/hangs on Azure when building FlinkContainer

2021-06-24 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-22110:
-
Comment: was deleted

(was: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19502&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529&l=28179)

> KinesisTableApiITCase.testTableApiSourceAndSink fails/hangs on Azure when 
> building FlinkContainer
> -
>
> Key: FLINK-22110
> URL: https://issues.apache.org/jira/browse/FLINK-22110
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16056&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529&l=27232
> {code:java}
> Apr 06 00:08:58 [ERROR] 
> testTableApiSourceAndSink(org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase)
>   Time elapsed: 0.007 s  <<< ERROR!
> Apr 06 00:08:58 java.lang.RuntimeException: Could not build the flink-dist 
> image
> Apr 06 00:08:58   at 
> org.apache.flink.tests.util.flink.FlinkContainer$FlinkContainerBuilder.build(FlinkContainer.java:280)
> Apr 06 00:08:58   at 
> org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase.<init>(KinesisTableApiITCase.java:77)
> Apr 06 00:08:58   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Apr 06 00:08:58   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Apr 06 00:08:58   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Apr 06 00:08:58   at 
> java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:217)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:266)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:263)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-22110) KinesisTableApiITCase.testTableApiSourceAndSink fails/hangs on Azure when building FlinkContainer

2021-06-24 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-22110:
-
Comment: was deleted

(was: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19502&view=logs&j=739e6eac-8312-5d31-d437-294c4d26fced&t=a68b8d89-50e9-5977-4500-f4fde4f57f9b&l=27563)

> KinesisTableApiITCase.testTableApiSourceAndSink fails/hangs on Azure when 
> building FlinkContainer
> -
>
> Key: FLINK-22110
> URL: https://issues.apache.org/jira/browse/FLINK-22110
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16056&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529&l=27232
> {code:java}
> Apr 06 00:08:58 [ERROR] 
> testTableApiSourceAndSink(org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase)
>   Time elapsed: 0.007 s  <<< ERROR!
> Apr 06 00:08:58 java.lang.RuntimeException: Could not build the flink-dist 
> image
> Apr 06 00:08:58   at 
> org.apache.flink.tests.util.flink.FlinkContainer$FlinkContainerBuilder.build(FlinkContainer.java:280)
> Apr 06 00:08:58   at 
> org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase.<init>(KinesisTableApiITCase.java:77)
> Apr 06 00:08:58   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Apr 06 00:08:58   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Apr 06 00:08:58   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Apr 06 00:08:58   at 
> java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:217)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:266)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:263)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22110) KinesisTableApiITCase.testTableApiSourceAndSink fails/hangs on Azure when building FlinkContainer

2021-06-24 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17369211#comment-17369211
 ] 

Xintong Song commented on FLINK-22110:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19502&view=logs&j=68a897ab-3047-5660-245a-cce8f83859f6&t=16ca2cca-2f63-5cce-12d2-d519b930a729&l=27748

> KinesisTableApiITCase.testTableApiSourceAndSink fails/hangs on Azure when 
> building FlinkContainer
> -
>
> Key: FLINK-22110
> URL: https://issues.apache.org/jira/browse/FLINK-22110
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16056&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529&l=27232
> {code:java}
> Apr 06 00:08:58 [ERROR] 
> testTableApiSourceAndSink(org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase)
>   Time elapsed: 0.007 s  <<< ERROR!
> Apr 06 00:08:58 java.lang.RuntimeException: Could not build the flink-dist 
> image
> Apr 06 00:08:58   at 
> org.apache.flink.tests.util.flink.FlinkContainer$FlinkContainerBuilder.build(FlinkContainer.java:280)
> Apr 06 00:08:58   at 
> org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase.<init>(KinesisTableApiITCase.java:77)
> Apr 06 00:08:58   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Apr 06 00:08:58   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Apr 06 00:08:58   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Apr 06 00:08:58   at 
> java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:217)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:266)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:263)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22110) KinesisTableApiITCase.testTableApiSourceAndSink fails/hangs on Azure when building FlinkContainer

2021-06-24 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17369210#comment-17369210
 ] 

Xintong Song commented on FLINK-22110:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19502&view=logs&j=739e6eac-8312-5d31-d437-294c4d26fced&t=a68b8d89-50e9-5977-4500-f4fde4f57f9b&l=27563

> KinesisTableApiITCase.testTableApiSourceAndSink fails/hangs on Azure when 
> building FlinkContainer
> -
>
> Key: FLINK-22110
> URL: https://issues.apache.org/jira/browse/FLINK-22110
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16056&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529&l=27232
> {code:java}
> Apr 06 00:08:58 [ERROR] 
> testTableApiSourceAndSink(org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase)
>   Time elapsed: 0.007 s  <<< ERROR!
> Apr 06 00:08:58 java.lang.RuntimeException: Could not build the flink-dist 
> image
> Apr 06 00:08:58   at 
> org.apache.flink.tests.util.flink.FlinkContainer$FlinkContainerBuilder.build(FlinkContainer.java:280)
> Apr 06 00:08:58   at 
> org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase.<init>(KinesisTableApiITCase.java:77)
> Apr 06 00:08:58   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Apr 06 00:08:58   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Apr 06 00:08:58   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Apr 06 00:08:58   at 
> java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:217)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:266)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:263)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22110) KinesisTableApiITCase.testTableApiSourceAndSink fails/hangs on Azure when building FlinkContainer

2021-06-24 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-22110:
-
Labels: test-stability  (was: auto-deprioritized-major test-stability)

> KinesisTableApiITCase.testTableApiSourceAndSink fails/hangs on Azure when 
> building FlinkContainer
> -
>
> Key: FLINK-22110
> URL: https://issues.apache.org/jira/browse/FLINK-22110
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16056&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529&l=27232
> {code:java}
> Apr 06 00:08:58 [ERROR] 
> testTableApiSourceAndSink(org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase)
>   Time elapsed: 0.007 s  <<< ERROR!
> Apr 06 00:08:58 java.lang.RuntimeException: Could not build the flink-dist 
> image
> Apr 06 00:08:58   at 
> org.apache.flink.tests.util.flink.FlinkContainer$FlinkContainerBuilder.build(FlinkContainer.java:280)
> Apr 06 00:08:58   at 
> org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase.<init>(KinesisTableApiITCase.java:77)
> Apr 06 00:08:58   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Apr 06 00:08:58   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Apr 06 00:08:58   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Apr 06 00:08:58   at 
> java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:217)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:266)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:263)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22110) KinesisTableApiITCase.testTableApiSourceAndSink fails/hangs on Azure when building FlinkContainer

2021-06-24 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-22110:
-
Priority: Major  (was: Minor)

> KinesisTableApiITCase.testTableApiSourceAndSink fails/hangs on Azure when 
> building FlinkContainer
> -
>
> Key: FLINK-22110
> URL: https://issues.apache.org/jira/browse/FLINK-22110
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Major
>  Labels: auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16056&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529&l=27232
> {code:java}
> Apr 06 00:08:58 [ERROR] 
> testTableApiSourceAndSink(org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase)
>   Time elapsed: 0.007 s  <<< ERROR!
> Apr 06 00:08:58 java.lang.RuntimeException: Could not build the flink-dist 
> image
> Apr 06 00:08:58   at 
> org.apache.flink.tests.util.flink.FlinkContainer$FlinkContainerBuilder.build(FlinkContainer.java:280)
> Apr 06 00:08:58   at 
> org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase.<init>(KinesisTableApiITCase.java:77)
> Apr 06 00:08:58   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Apr 06 00:08:58   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Apr 06 00:08:58   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Apr 06 00:08:58   at 
> java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:217)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:266)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:263)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22110) KinesisTableApiITCase.testTableApiSourceAndSink fails/hangs on Azure when building FlinkContainer

2021-06-24 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17369209#comment-17369209
 ] 

Xintong Song commented on FLINK-22110:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19502&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529&l=28179

> KinesisTableApiITCase.testTableApiSourceAndSink fails/hangs on Azure when 
> building FlinkContainer
> -
>
> Key: FLINK-22110
> URL: https://issues.apache.org/jira/browse/FLINK-22110
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16056&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529&l=27232
> {code:java}
> Apr 06 00:08:58 [ERROR] 
> testTableApiSourceAndSink(org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase)
>   Time elapsed: 0.007 s  <<< ERROR!
> Apr 06 00:08:58 java.lang.RuntimeException: Could not build the flink-dist 
> image
> Apr 06 00:08:58   at 
> org.apache.flink.tests.util.flink.FlinkContainer$FlinkContainerBuilder.build(FlinkContainer.java:280)
> Apr 06 00:08:58   at 
> org.apache.flink.streaming.kinesis.test.KinesisTableApiITCase.<init>(KinesisTableApiITCase.java:77)
> Apr 06 00:08:58   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Apr 06 00:08:58   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Apr 06 00:08:58   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Apr 06 00:08:58   at 
> java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:217)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:266)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:263)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Apr 06 00:08:58   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Apr 06 00:08:58   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> Apr 06 00:08:58   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #16258: [FLINK-23120][python] Fix ByteArrayWrapperSerializer.serialize to use writeInt to serialize the length

2021-06-24 Thread GitBox


flinkbot edited a comment on pull request #16258:
URL: https://github.com/apache/flink/pull/16258#issuecomment-866824620


   
   ## CI report:
   
   * 58636681dabbdae5ef9993b87d29ab25f99efc89 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19508)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19443)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] gaoyunhaii commented on a change in pull request #15055: [FLINK-21086][runtime][checkpoint] Support checkpoint alignment with finished input channels

2021-06-24 Thread GitBox


gaoyunhaii commented on a change in pull request #15055:
URL: https://github.com/apache/flink/pull/15055#discussion_r658442395



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/partition/consumer/TestInputChannel.java
##
@@ -190,6 +190,9 @@ public void resumeConsumption() {
 isBlocked = false;
 }
 
+@Override
+public void acknowledgeAllRecordsProcessed() throws IOException {}

Review comment:
   Currently we should be able to throw `UnsupportedOperationException` without affecting the existing tests. I modified all the related testing `InputChannel` and `InputGate` implementations to throw `UnsupportedOperationException` for now.
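
   A minimal sketch of the idea, using a hypothetical stand-in class (the real `TestInputChannel` has many more members; the class name, field, and message below are illustrative only, while the method signature is taken from the diff above):

```java
import java.io.IOException;

// Hypothetical, trimmed-down stand-in for a test-only InputChannel.
// It illustrates the "fail loudly until a test actually needs the method" approach.
public class ExampleTestInputChannel {

    private boolean isBlocked;

    public void resumeConsumption() {
        isBlocked = false;
    }

    // Instead of a silent no-op, the stub throws, so any test that
    // unexpectedly reaches this code path surfaces the problem immediately.
    public void acknowledgeAllRecordsProcessed() throws IOException {
        throw new UnsupportedOperationException(
                "acknowledgeAllRecordsProcessed is not supported by this test channel");
    }

    public static void main(String[] args) {
        ExampleTestInputChannel channel = new ExampleTestInputChannel();
        try {
            channel.acknowledgeAllRecordsProcessed();
        } catch (UnsupportedOperationException | IOException e) {
            System.out.println("Expected for a stub: " + e.getMessage());
        }
    }
}
```

   Throwing here rather than keeping an empty override is a judgment call: it makes any unintended use of the method visible in the existing tests instead of silently passing.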




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



