[GitHub] [flink] klion26 commented on a change in pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…

2020-05-20 Thread GitBox


klion26 commented on a change in pull request #12230:
URL: https://github.com/apache/flink/pull/12230#discussion_r428406792



##
File path: docs/getting-started/index.zh.md
##
@@ -27,61 +27,27 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-There are many ways to get started with Apache Flink. Which one is the best for
-you depends on your goals and prior experience:
+上手使用 Apache Flink 有很多方式,哪一个最适合你取决于你的目标和以前的经验。
 
-* take a look at the **Docker Playgrounds** if you want to see what Flink can 
do, via a hands-on,
-  docker-based introduction to specific Flink concepts
-* explore one of the **Code Walkthroughs** if you want a quick, end-to-end
-  introduction to one of Flink's APIs
-* work your way through the **Hands-on Training** for a comprehensive,
-  step-by-step introduction to Flink
-* use **Project Setup** if you already know the basics of Flink and want a
-  project template for Java or Scala, or need help setting up the dependencies
+* 通过阅读 **Docker Playgrounds** 小节中基于 Docker 的 Flink 实践可以了解 Flink 的基本概念和功能。
+* 通过阅读 **Code Walkthroughs** 小节可以快速了解 Flink API。
+* 通过阅读 **Hands-on Training** 章节可以逐步全面学习 Flink。
+* 如果你已经了解 Flink 的基本概念并且想构建 Flink 项目,可以通过**项目构建设置**小节获取 Java/Scala 的项目模板或项目依赖。
 
-### Taking a first look at Flink
+### 初识 Flink
 
-The **Docker Playgrounds** provide sandboxed Flink environments that are set 
up in just a few minutes and which allow you to explore and play with Flink.
+通过 **Docker Playgrounds** 提供的 Flink 沙盒环境,你只需花几分钟做些简单设置,就可以开始探索和使用 Flink。
 
-* The [**Operations Playground**]({% link 
getting-started/docker-playgrounds/flink-operations-playground.md %}) shows you 
how to operate streaming applications with Flink. You can experience how Flink 
recovers application from failures, upgrade and scale streaming applications up 
and down, and query application metrics.
+* [**Flink Operations Playground**]({% link 
getting-started/docker-playgrounds/flink-operations-playground.zh.md %}) 
向你展示了如何使用 Flink 编写流数据应用程序。你可以从中学习到 Flink 
应用程序的故障恢复、升级、并行度提高、并行度降低和程序运行状态的指标监控等特性。

Review comment:
   Would `并行度提高、并行度降低` ("raising the parallelism, lowering the parallelism") read better as `修改并行度` ("changing the parallelism")?


[jira] [Commented] (FLINK-17444) StreamingFileSink Azure HadoopRecoverableWriter class missing.

2020-05-20 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112724#comment-17112724
 ] 

Yang Wang commented on FLINK-17444:
---

[~kkl0u] I am not sure how difficult it is to make every {{FileSystem}} 
work with {{StreamingFileSink}}. Recently, our internal users also wanted 
to use {{StreamingFileSink}} to write to the Aliyun {{OSSFileSystem}}, which is 
exactly the same situation. Is it possible to have a common way to do this, or 
do we have to add a {{FileSystem}}-specific implementation one by one?
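
As a side note for readers, here is a minimal sketch (the URI is a placeholder) of how to probe whether a particular {{FileSystem}} offers the {{RecoverableWriter}} that {{StreamingFileSink}} requires:

{code:java}
import java.net.URI;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.RecoverableWriter;

public class RecoverableWriterProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder URI -- point it at the filesystem in question (wasb://, oss://, ...).
        FileSystem fs = FileSystem.get(new URI("oss://my-bucket/"));
        try {
            RecoverableWriter writer = fs.createRecoverableWriter();
            System.out.println("StreamingFileSink should work: " + writer.getClass());
        } catch (UnsupportedOperationException | NoClassDefFoundError e) {
            // Filesystems without a RecoverableWriter implementation fail here,
            // matching the error reported in this ticket.
            System.out.println("No RecoverableWriter support: " + e);
        }
    }
}
{code}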

> StreamingFileSink Azure HadoopRecoverableWriter class missing.
> --
>
> Key: FLINK-17444
> URL: https://issues.apache.org/jira/browse/FLINK-17444
> Project: Flink
>  Issue Type: New Feature
>  Components: FileSystems
>Reporter: Marie May
>Priority: Minor
>
> Hello, I was recently attempting to use the Streaming File Sink to store data 
> to Azure and got an exception saying that the HadoopRecoverableWriter class 
> is missing. When I searched whether anyone else had the issue, I came across 
> this post on [Stack 
> Overflow|https://stackoverflow.com/questions/61246683/flink-streamingfilesink-on-azure-blob-storage].
>  Seeing that no one responded, I asked about it on the mailing list and was told 
> to submit the issue here.
> This is the exception message they posted below, but the Stack Overflow post 
> goes into more detail about where they believe the issue comes from. 
> {code:java}
> java.lang.NoClassDefFoundError: 
> org/apache/flink/fs/azure/common/hadoop/HadoopRecoverableWriter   
>  
> at 
> org.apache.flink.fs.azure.common.hadoop.HadoopFileSystem.createRecoverableWriter(HadoopFileSystem.java:202)
>  
> at 
> org.apache.flink.core.fs.PluginFileSystemFactory$ClassLoaderFixingFileSystem.createRecoverableWriter(PluginFileSystemFactory.java:129)
>   
> at 
> org.apache.flink.core.fs.SafetyNetWrapperFileSystem.createRecoverableWriter(SafetyNetWrapperFileSystem.java:69)
>  
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.<init>(Buckets.java:117)
>
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink$RowFormatBuilder.createBuckets(StreamingFileSink.java:288)
>
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.initializeState(StreamingFileSink.java:402)
>   
> at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
> 
> at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
>   
> at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
>
> at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:284)
> 
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.initializeStateAndOpen(StreamTask.java:1006)
> 
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:454)
>   
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:94)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:449)
>
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:461)
>  
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707) 
>  
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)   
>  
> at java.lang.Thread.run(Thread.java:748)  
> {code}





[jira] [Commented] (FLINK-17817) CollectResultFetcher fails with EOFException in AggregateReduceGroupingITCase

2020-05-20 Thread Caizhi Weng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112723#comment-17112723
 ] 

Caizhi Weng commented on FLINK-17817:
-

[~dwysakowicz] no problem
[~sewen] sorry for the delay... I originally wanted to wait for FLINK-17774. I've 
added a [quick fix|https://github.com/apache/flink/pull/12272] and will merge 
it asap.

> CollectResultFetcher fails with EOFException in AggregateReduceGroupingITCase
> -
>
> Key: FLINK-17817
> URL: https://issues.apache.org/jira/browse/FLINK-17817
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Caizhi Weng
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>
> CI: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1826&view=logs&j=e25d5e7e-2a9c-5589-4940-0b638d75a414&t=f83cd372-208c-5ec4-12a8-337462457129
> {code}
> 2020-05-19T10:34:18.3224679Z [ERROR] 
> testSingleAggOnTable_SortAgg(org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase)
>   Time elapsed: 7.537 s  <<< ERROR!
> 2020-05-19T10:34:18.3225273Z java.lang.RuntimeException: Failed to fetch next 
> result
> 2020-05-19T10:34:18.3227634Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:92)
> 2020-05-19T10:34:18.3228518Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:63)
> 2020-05-19T10:34:18.3229170Z  at 
> org.apache.flink.shaded.guava18.com.google.common.collect.Iterators.addAll(Iterators.java:361)
> 2020-05-19T10:34:18.3229863Z  at 
> org.apache.flink.shaded.guava18.com.google.common.collect.Lists.newArrayList(Lists.java:160)
> 2020-05-19T10:34:18.3230586Z  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300)
> 2020-05-19T10:34:18.3231303Z  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:141)
> 2020-05-19T10:34:18.3231996Z  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:107)
> 2020-05-19T10:34:18.3232847Z  at 
> org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable(AggregateReduceGroupingITCase.scala:176)
> 2020-05-19T10:34:18.3233694Z  at 
> org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable_SortAgg(AggregateReduceGroupingITCase.scala:122)
> 2020-05-19T10:34:18.3234461Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-05-19T10:34:18.3234983Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-05-19T10:34:18.3235632Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-05-19T10:34:18.3236615Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-05-19T10:34:18.3237256Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-05-19T10:34:18.3237965Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-05-19T10:34:18.3238750Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-05-19T10:34:18.3239314Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-05-19T10:34:18.3239838Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-05-19T10:34:18.3240362Z  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2020-05-19T10:34:18.3240803Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-05-19T10:34:18.3243624Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-05-19T10:34:18.3244531Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-05-19T10:34:18.3245325Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-05-19T10:34:18.3246086Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-05-19T10:34:18.3246765Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-05-19T10:34:18.3247390Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-05-19T10:34:18.3248012Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-05-19T10:34:18.3248779Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-05-19T10:34:18.3249417Z  at 
> org.junit.runners.ParentRu

[GitHub] [flink] flinkbot commented on pull request #12272: [FLINK-17817] Fix type serializer duplication in CollectSinkFunction

2020-05-20 Thread GitBox


flinkbot commented on pull request #12272:
URL: https://github.com/apache/flink/pull/12272#issuecomment-631834610


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit ed5e8c901271495c93ed345f1750e43e6897e7e8 (Thu May 21 
02:06:22 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Commented] (FLINK-17819) Yarn error unhelpful when forgetting HADOOP_CLASSPATH

2020-05-20 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112721#comment-17112721
 ] 

Yang Wang commented on FLINK-17819:
---

Just a side note: we cannot just verify the HADOOP_CLASSPATH environment 
variable. {{flink-shaded-hadoop}} is widely used in many companies, precisely 
because the Flink job deployer is not always running on a machine with Hadoop 
installed.
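
As a rough sketch of the proposed stub (class name and message are hypothetical, not existing Flink code), and keeping in mind the caveat above that HADOOP_CLASSPATH alone is not a sufficient signal when {{flink-shaded-hadoop}} is used:

{code:java}
// Hypothetical pre-flight check, not existing Flink code.
public final class YarnPreflightCheck {
    public static void main(String[] args) {
        String hadoopClasspath = System.getenv("HADOOP_CLASSPATH");
        if (hadoopClasspath == null || hadoopClasspath.isEmpty()) {
            // Replaces the misleading "JAR file does not exist: -yjm" error.
            throw new IllegalStateException(
                    "yarn-cluster mode requires Hadoop on the classpath. "
                            + "Export HADOOP_CLASSPATH (e.g. export HADOOP_CLASSPATH=$(hadoop classpath)) "
                            + "or use a flink-shaded-hadoop distribution.");
        }
        System.out.println("HADOOP_CLASSPATH is set.");
    }
}
{code}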

> Yarn error unhelpful when forgetting HADOOP_CLASSPATH
> -
>
> Key: FLINK-17819
> URL: https://issues.apache.org/jira/browse/FLINK-17819
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: Arvid Heise
>Assignee: Kostas Kloudas
>Priority: Critical
>  Labels: usability
> Fix For: 1.11.0
>
>
> When running
> {code:bash}
> flink run -m yarn-cluster -yjm 1768 -ytm 50072 -ys 32 ...
> {code}
> without some export HADOOP_CLASSPATH, we get the unhelpful message
> {noformat}
> Could not build the program from JAR file: JAR file does not exist: -yjm
> {noformat}
> I'd expect something like
> {noformat}
> yarn-cluster can only be used with exported HADOOP_CLASSPATH, see  for 
> more information{noformat}
>  
> I suggest loading a stub for YARN cluster deployment if the actual 
> implementation fails to load, which prints this error when used.





[GitHub] [flink] TsReaper opened a new pull request #12272: [FLINK-17817] Fix type serializer duplication in CollectSinkFunction

2020-05-20 Thread GitBox


TsReaper opened a new pull request #12272:
URL: https://github.com/apache/flink/pull/12272


   ## What is the purpose of the change
   
   This is a hot fix for `CollectSinkFunction`. `TypeSerializer`s are not 
thread-safe, but currently `CollectSinkFunction` reuses them across two threads. 
This PR fixes this problem.
   
   FLINK-17774 also solves this problem, but we should add a quick fix 
now so that it does not mask other test failures.
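
   The general pattern (a sketch of the technique, not the exact code in this PR): each thread serializes with its own `duplicate()` of the serializer, because a stateful serializer must never be shared:

```java
import org.apache.flink.api.common.typeutils.TypeSerializer;

final class SerializerPerThread<T> {
    private final TypeSerializer<T> original;

    SerializerPerThread(TypeSerializer<T> original) {
        this.original = original;
    }

    Runnable newWorker() {
        // duplicate() returns a deep copy for stateful serializers (and may
        // return the instance itself only when it is stateless/thread-safe).
        TypeSerializer<T> ownCopy = original.duplicate();
        return () -> {
            // ... serialize records with ownCopy only, never with `original` ...
        };
    }
}
```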
   
   ## Brief change log
   
   - Fix the serializer thread-safety problem in CollectSinkFunction
   
   ## Verifying this change
   
   This change is already covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable







[jira] [Closed] (FLINK-17745) PackagedProgram' extractedTempLibraries and jarfiles may be duplicate

2020-05-20 Thread Echo Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Echo Lee closed FLINK-17745.

Resolution: Not A Problem

> PackagedProgram' extractedTempLibraries and jarfiles may be duplicate
> -
>
> Key: FLINK-17745
> URL: https://issues.apache.org/jira/browse/FLINK-17745
> Project: Flink
>  Issue Type: Improvement
>  Components: Client / Job Submission
>Reporter: Echo Lee
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
>
> When I submit a Flink app with a fat jar, PackagedProgram will extract temp 
> libraries from the fat jar and add them to pipeline.jars, so pipeline.jars 
> contains both the fat jar and the temp libraries. I don't think we should add 
> the fat jar to pipeline.jars if extractedTempLibraries is not empty.





[GitHub] [flink] klion26 commented on pull request #12096: [FLINK-16074][docs-zh] Translate the Overview page for State & Fault Tolerance into Chinese

2020-05-20 Thread GitBox


klion26 commented on pull request #12096:
URL: https://github.com/apache/flink/pull/12096#issuecomment-631833147


   Rebased onto the latest master







[jira] [Commented] (FLINK-17745) PackagedProgram' extractedTempLibraries and jarfiles may be duplicate

2020-05-20 Thread Echo Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112719#comment-17112719
 ] 

Echo Lee commented on FLINK-17745:
--

[~fly_in_gis] Yes, this is not a problem.

[~kkl0u] OK, I will close this issue, and thank you for initiating the 
discussion.

> PackagedProgram' extractedTempLibraries and jarfiles may be duplicate
> -
>
> Key: FLINK-17745
> URL: https://issues.apache.org/jira/browse/FLINK-17745
> Project: Flink
>  Issue Type: Improvement
>  Components: Client / Job Submission
>Reporter: Echo Lee
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
>
> When I submit a Flink app with a fat jar, PackagedProgram will extract temp 
> libraries from the fat jar and add them to pipeline.jars, so pipeline.jars 
> contains both the fat jar and the temp libraries. I don't think we should add 
> the fat jar to pipeline.jars if extractedTempLibraries is not empty.





[jira] [Assigned] (FLINK-17850) PostgresCatalogITCase . testGroupByInsert() fails on CI

2020-05-20 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-17850:
---

Assignee: Jark Wu

> PostgresCatalogITCase . testGroupByInsert() fails on CI
> ---
>
> Key: FLINK-17850
> URL: https://issues.apache.org/jira/browse/FLINK-17850
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem
>Affects Versions: 1.11.0
>Reporter: Stephan Ewen
>Assignee: Jark Wu
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.11.0, 1.12.0
>
>
> {{org.apache.flink.connector.jdbc.catalog.PostgresCatalogITCase . 
> testGroupByInsert}}
> Error:
> {code}
> 2020-05-20T16:36:33.9647037Z org.apache.flink.table.api.ValidationException: 
> 2020-05-20T16:36:33.9647354Z Field types of query result and registered 
> TableSink mypg.postgres.primitive_table2 do not match.
> 2020-05-20T16:36:33.9648233Z Query schema: [int: INT NOT NULL, EXPR$1: 
> VARBINARY(2147483647) NOT NULL, short: SMALLINT NOT NULL, EXPR$3: BIGINT, 
> EXPR$4: FLOAT, EXPR$5: DOUBLE, EXPR$6: DECIMAL(10, 5), EXPR$7: BOOLEAN, 
> EXPR$8: VARCHAR(2147483647), EXPR$9: CHAR(1) NOT NULL, EXPR$10: CHAR(1) NOT 
> NULL, EXPR$11: VARCHAR(20), EXPR$12: TIMESTAMP(5), EXPR$13: DATE, EXPR$14: 
> TIME(0), EXPR$15: DECIMAL(38, 18)]
> 2020-05-20T16:36:33.9650272Z Sink schema: [int: INT, bytea: 
> VARBINARY(2147483647), short: SMALLINT, long: BIGINT, real: FLOAT, 
> double_precision: DOUBLE, numeric: DECIMAL(10, 5), decimal: DECIMAL(10, 1), 
> boolean: BOOLEAN, text: VARCHAR(2147483647), char: CHAR(1), character: 
> CHAR(3), character_varying: VARCHAR(20), timestamp: TIMESTAMP(5), date: DATE, 
> time: TIME(0), default_numeric: DECIMAL(38, 18)]
> 2020-05-20T16:36:33.9651218Z  at 
> org.apache.flink.table.planner.sinks.TableSinkUtils$.validateSchemaAndApplyImplicitCast(TableSinkUtils.scala:100)
> 2020-05-20T16:36:33.9651689Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:218)
> 2020-05-20T16:36:33.9652136Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:193)
> 2020-05-20T16:36:33.9652936Z  at scala.Option.map(Option.scala:146)
> 2020-05-20T16:36:33.9653593Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:193)
> 2020-05-20T16:36:33.9653993Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:152)
> 2020-05-20T16:36:33.9654428Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:152)
> 2020-05-20T16:36:33.9654841Z  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> 2020-05-20T16:36:33.9655221Z  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> 2020-05-20T16:36:33.9655759Z  at 
> scala.collection.Iterator$class.foreach(Iterator.scala:891)
> 2020-05-20T16:36:33.9656072Z  at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> 2020-05-20T16:36:33.9656413Z  at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> 2020-05-20T16:36:33.9656890Z  at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> 2020-05-20T16:36:33.9657211Z  at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> 2020-05-20T16:36:33.9657525Z  at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104)
> 2020-05-20T16:36:33.9657878Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:152)
> 2020-05-20T16:36:33.9658350Z  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1217)
> 2020-05-20T16:36:33.9658784Z  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:663)
> 2020-05-20T16:36:33.9659391Z  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeOperation(TableEnvironmentImpl.java:750)
> 2020-05-20T16:36:33.9659856Z  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:653)
> 2020-05-20T16:36:33.9660507Z  at 
> org.apache.flink.table.planner.runtime.utils.TableEnvUtil$.execInsertSqlAndWaitResult(TableEnvUtil.scala:27)
> 2020-05-20T16:36:33.9661115Z  at 
> org.apache.flink.table.planner.runtime.utils.TableEnvUtil.execInsertSqlAndWaitResult(TableEnvUtil.scala)
> 2020-05-20T16:36:33.9661583Z  at 
> org.apache.flink.connector.jdbc.catalog.PostgresCatalogITCase.testGroupByInsert(PostgresCatalogITCase.java:88)
> {code}
> Full log: 
> https://dev.azure.com/sewen0794/19b23adf-d190-4fb4-ae6e-2e92b08923a3/_apis/build/builds/25/logs/93





[jira] [Commented] (FLINK-17657) jdbc not support read BIGINT UNSIGNED field

2020-05-20 Thread lun zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112698#comment-17112698
 ] 

lun zhang commented on FLINK-17657:
---

Hi [~JinxinTang]. Sorry, I can only reproduce this in the SQL client.

> jdbc not support read BIGINT UNSIGNED field
> ---
>
> Key: FLINK-17657
> URL: https://issues.apache.org/jira/browse/FLINK-17657
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Client
>Affects Versions: 1.10.0, 1.10.1
>Reporter: lun zhang
>Priority: Major
> Attachments: env.yaml, excetion.txt
>
>
> I use the SQL client to read a MySQL table, but I found I can't read a table 
> containing a `BIGINT UNSIGNED` field. It fails with 
>  Caused by: java.lang.ClassCastException: java.math.BigInteger cannot be cast 
> to java.lang.Long
>  
> MySQL table:
>  
> create table tb
>  (
>  id BIGINT UNSIGNED auto_increment 
>  primary key,
>  cooper BIGINT(19) null ,
> user_sex VARCHAR(2) null 
> );
>  
> My env yaml is env.yaml.





[jira] [Commented] (FLINK-17657) jdbc not support read BIGINT UNSIGNED field

2020-05-20 Thread lun zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112697#comment-17112697
 ] 

lun zhang commented on FLINK-17657:
---

Hi [~danny0405], I don't know how to fix it. It needs a change to the data 
types that Flink SQL supports.
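
For context, the cast failure can be reproduced with plain JDBC, since the MySQL driver reports {{BIGINT UNSIGNED}} values as {{java.math.BigInteger}} (a sketch; connection details are placeholders):

{code:java}
import java.math.BigInteger;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class BigintUnsignedRepro {
    public static void main(String[] args) throws Exception {
        // Placeholder connection settings.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/test", "user", "pwd");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id FROM tb")) {
            while (rs.next()) {
                Object raw = rs.getObject(1);  // BigInteger for BIGINT UNSIGNED
                // (Long) raw would throw the ClassCastException from this ticket;
                // a safe narrowing goes through BigInteger instead:
                long id = ((BigInteger) raw).longValueExact();
                System.out.println(id);
            }
        }
    }
}
{code}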

> jdbc not support read BIGINT UNSIGNED field
> ---
>
> Key: FLINK-17657
> URL: https://issues.apache.org/jira/browse/FLINK-17657
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Client
>Affects Versions: 1.10.0, 1.10.1
>Reporter: lun zhang
>Priority: Major
> Attachments: env.yaml, excetion.txt
>
>
> I use the SQL client to read a MySQL table, but I found I can't read a table 
> containing a `BIGINT UNSIGNED` field. It fails with 
>  Caused by: java.lang.ClassCastException: java.math.BigInteger cannot be cast 
> to java.lang.Long
>  
> MySQL table:
>  
> create table tb
>  (
>  id BIGINT UNSIGNED auto_increment 
>  primary key,
>  cooper BIGINT(19) null ,
> user_sex VARCHAR(2) null 
> );
>  
> My env yaml is env.yaml.





[jira] [Updated] (FLINK-17657) jdbc not support read BIGINT UNSIGNED field

2020-05-20 Thread lun zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lun zhang updated FLINK-17657:
--
Fix Version/s: (was: 1.11.0)

> jdbc not support read BIGINT UNSIGNED field
> ---
>
> Key: FLINK-17657
> URL: https://issues.apache.org/jira/browse/FLINK-17657
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Client
>Affects Versions: 1.10.0, 1.10.1
>Reporter: lun zhang
>Priority: Major
> Attachments: env.yaml, excetion.txt
>
>
> I use the SQL client to read a MySQL table, but I found I can't read a table 
> containing a `BIGINT UNSIGNED` field. It fails with 
>  Caused by: java.lang.ClassCastException: java.math.BigInteger cannot be cast 
> to java.lang.Long
>  
> MySQL table:
>  
> create table tb
>  (
>  id BIGINT UNSIGNED auto_increment 
>  primary key,
>  cooper BIGINT(19) null ,
> user_sex VARCHAR(2) null 
> );
>  
> My env yaml is env.yaml.





[GitHub] [flink] flinkbot edited a comment on pull request #12271: [FLINK-17854][core] Let SourceReader use InputStatus directly

2020-05-20 Thread GitBox


flinkbot edited a comment on pull request #12271:
URL: https://github.com/apache/flink/pull/12271#issuecomment-631792224


   
   ## CI report:
   
   * 195266eaabfc2f554a22cd6c79a553c69dcfea29 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1972)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12271: [FLINK-17854][core] Let SourceReader use InputStatus directly

2020-05-20 Thread GitBox


flinkbot edited a comment on pull request #12271:
URL: https://github.com/apache/flink/pull/12271#issuecomment-631792224


   
   ## CI report:
   
   * 195266eaabfc2f554a22cd6c79a553c69dcfea29 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1972)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-17560) No Slots available exception in Apache Flink Job Manager while Scheduling

2020-05-20 Thread josson paul kalapparambath (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112667#comment-17112667
 ] 

josson paul kalapparambath commented on FLINK-17560:


[~xintongsong]

What are the chances that the code below doesn't execute? If this piece of code 
doesn't execute, there is a chance of a slot not being released. 

Can this happen if there is something wrong in the user code (the actual 
transformations), or due to blocked threads?

https://github.com/apache/flink/blob/d54807ba10d0392a60663f030f9fe0bfa1c66754/flink-runtime/src/main/java/org/apache/flink/runtime/taskmanager/Task.java#L840

> No Slots available exception in Apache Flink Job Manager while Scheduling
> -
>
> Key: FLINK-17560
> URL: https://issues.apache.org/jira/browse/FLINK-17560
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.8.3
> Environment: Flink version 1.8.3
> Session cluster
>Reporter: josson paul kalapparambath
>Priority: Major
>
> Set up
> --
> Flink version 1.8.3
> Zookeeper HA cluster
> 1 ResourceManager/Dispatcher (Same Node)
> 1 TaskManager
> 4 pipelines running with various parallelisms
> Issue
> --
> Occasionally when the Job Manager gets restarted we noticed that all the 
> pipelines are not getting scheduled. The error that is reported by the Job 
> Manager is 'not enough slots are available'. This should not be the case 
> because the task manager was deployed with sufficient slots for the number of 
> pipelines/parallelism we have.
> We further noticed that the slot report sent by the taskmanager contains slots 
> filled with old CANCELLED job Ids. I am not sure why the task manager still 
> holds the details of the old jobs. A thread dump on the task manager confirms 
> that the old pipelines are not running.
> I am aware of https://issues.apache.org/jira/browse/FLINK-12865. But this is 
> not the issue happening in this case.





[GitHub] [flink] flinkbot commented on pull request #12271: [FLINK-17854][core] Let SourceReader use InputStatus directly

2020-05-20 Thread GitBox


flinkbot commented on pull request #12271:
URL: https://github.com/apache/flink/pull/12271#issuecomment-631792224


   
   ## CI report:
   
   * 195266eaabfc2f554a22cd6c79a553c69dcfea29 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #12271: [FLINK-17854][core] Let SourceReader use InputStatus directly

2020-05-20 Thread GitBox


flinkbot commented on pull request #12271:
URL: https://github.com/apache/flink/pull/12271#issuecomment-631785385


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 195266eaabfc2f554a22cd6c79a553c69dcfea29 (Wed May 20 
23:11:59 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Comment Edited] (FLINK-17820) Memory threshold is ignored for channel state

2020-05-20 Thread Roman Khachatryan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112651#comment-17112651
 ] 

Roman Khachatryan edited comment on FLINK-17820 at 5/20/20, 11:10 PM:
--

The data written to FsCheckpointStateOutputStream goes through a DataOutputStream.

Without flushing the DataOutputStream, some data can be left in its buffer. Flushing 
the DataOutputStream also flushes the underlying FsCheckpointStateOutputStream.

Although in the current JDK DataOutputStream doesn't buffer data, do you think we 
can rely on that?
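
A quick JDK-level illustration of the flush chaining being discussed (plain {{java.io}}, nothing Flink-specific):

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class FlushChain {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream underlying = new ByteArrayOutputStream() {
            @Override
            public void flush() throws IOException {
                System.out.println("underlying stream flushed");
                super.flush();
            }
        };
        try (DataOutputStream out = new DataOutputStream(underlying)) {
            out.writeLong(42L);
            // DataOutputStream inherits flush() from FilterOutputStream, which
            // delegates to the wrapped stream -- flushing here flushes `underlying`.
            out.flush();
        }
    }
}
{code}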


was (Author: roman_khachatryan):
The data to FsCheckpointStateOutputStream is written through DataOutputStream.

Without flushing DataOutputStream some data can be left in its buffer. Flushing 
DataOutputStream also flushes the underlying FsCheckpointStateOutputStream.

Although in current JDK DataOutputStream doesn't buffer data, I think it's 
risky to rely on it.

 

> Memory threshold is ignored for channel state
> -
>
> Key: FLINK-17820
> URL: https://issues.apache.org/jira/browse/FLINK-17820
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / Task
>Affects Versions: 1.11.0
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Config parameter state.backend.fs.memory-threshold is ignored for channel 
> state, causing each subtask to have a file per checkpoint, regardless of the 
> size of the channel state (of this subtask).
> This also causes slow cleanup and delays the next checkpoint.
>  
> The problem is that {{ChannelStateCheckpointWriter.finishWriteAndResult}} 
> calls flush(), which actually flushes the data to disk.
>  
> From the FSDataOutputStream.flush Javadoc:
> A completed flush does not mean that the data is necessarily persistent. Data 
> persistence can only be assumed after calls to close() or sync().
>  
> Possible solutions:
> 1. do not flush in {{ChannelStateCheckpointWriter.finishWriteAndResult}} (which 
> can lead to data loss in a wrapping stream)
> 2. change {{FsCheckpointStateOutputStream.flush}} behavior
> 3. wrap {{FsCheckpointStateOutputStream}} to prevent flush





[jira] [Updated] (FLINK-17854) Use InputStatus directly in user-facing async input APIs (like source readers)

2020-05-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17854:
---
Labels: pull-request-available  (was: )

> Use InputStatus directly in user-facing async input APIs (like source readers)
> --
>
> Key: FLINK-17854
> URL: https://issues.apache.org/jira/browse/FLINK-17854
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> The flink-runtime uses the {{InputStatus}} enum in its 
> {{PushingAsyncDataInput}}.
> The flink-core {{SourceReader}} has a separate enum with the same purpose.
> In the {{SourceOperator}} we need to bridge between these two, which is 
> clumsy and a bit inefficient.
> We can simply make {{InputStatus}} part of {{flink-core}} I/O packages and 
> use it in the {{SourceReader}}, to avoid having to bridge it and the runtime 
> part.





[GitHub] [flink] StephanEwen opened a new pull request #12271: [FLINK-17854][core] Let SourceReader use InputStatus directly

2020-05-20 Thread GitBox


StephanEwen opened a new pull request #12271:
URL: https://github.com/apache/flink/pull/12271


   ## What is the purpose of the change
   
   The `flink-runtime` uses the `InputStatus` enum in its 
`PushingAsyncDataInput`.
   The `SourceReader` in `flink-core` has a separate enum with the same purpose 
(1:1 correspondence).
   In the `SourceOperator` we need to bridge between these two, which is clumsy 
and a bit inefficient.
   
   This PR makes `InputStatus` a part of the `flink-core` I/O packages and uses 
it in the `SourceReader` directly, to avoid having to bridge it and the runtime 
part.
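
   To illustrate the clumsiness being removed (enum and class names here are illustrative, not the exact Flink types), the operator previously had to translate between two 1:1 enums on every poll:

```java
// Before: two equivalent enums force a translation step on a hot path.
enum ReaderStatus { MORE_AVAILABLE, NOTHING_AVAILABLE, END_OF_INPUT }
enum InputStatus  { MORE_AVAILABLE, NOTHING_AVAILABLE, END_OF_INPUT }

final class StatusBridge {
    static InputStatus toInputStatus(ReaderStatus status) {
        switch (status) {
            case MORE_AVAILABLE:    return InputStatus.MORE_AVAILABLE;
            case NOTHING_AVAILABLE: return InputStatus.NOTHING_AVAILABLE;
            case END_OF_INPUT:      return InputStatus.END_OF_INPUT;
            default: throw new IllegalStateException("unknown status: " + status);
        }
    }
}
// After this PR, the SourceReader returns InputStatus directly and the bridge disappears.
```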
   
   ## Verifying this change
   
   This change is a simple refactoring with no changes in functionality.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): **no**
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: **no**
 - The serializers: **no**
 - The runtime per-record code paths (performance sensitive): **yes**
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: **no**
 - The S3 file system connector: **no**
   
   ## Documentation
   
 - Does this pull request introduce a new feature? **no**
 - If yes, how is the feature documented? **not applicable**
   







[jira] [Commented] (FLINK-17808) Rename checkpoint meta file to "_metadata" until it has completed writing

2020-05-20 Thread Stephan Ewen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112652#comment-17112652
 ] 

Stephan Ewen commented on FLINK-17808:
--

I think this pattern (atomically visible file) may be quite common. We can add 
a method to {{FileSystem}} or add a flag to {{create(Path)}} to create a stream 
for a file that is only visible once the writing is complete.

That should be easy to implement
  - file:// will do that via rename
  - hdfs:// will do that via rename
  - s3:// does not need to do anything, it always behaves like that.
  - oss:// ? (not sure, but I would assume also does not need anything, object 
stores tend to publish at the end)

This avoids cases where components manually implement a rename-based solution 
(which breaks visibility consistency on S3 and probably on other object stores).
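
For reference, the rename-based variant that file:// and hdfs:// would use underneath looks roughly like this (plain {{java.nio}}, local-filesystem semantics only -- exactly the pattern that does not carry over to S3):

{code:java}
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class AtomicPublish {
    public static void main(String[] args) throws Exception {
        Path target = Paths.get("/tmp/checkpoint/_metadata");          // placeholder path
        Path inProgress = target.resolveSibling("_metadata.inprogress");
        Files.createDirectories(target.getParent());
        Files.write(inProgress, "checkpoint meta ...".getBytes(StandardCharsets.UTF_8));
        // The file becomes visible under its final name only after the write completed.
        Files.move(inProgress, target, StandardCopyOption.ATOMIC_MOVE);
    }
}
{code}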

> Rename checkpoint meta file to "_metadata" until it has completed writing
> -
>
> Key: FLINK-17808
> URL: https://issues.apache.org/jira/browse/FLINK-17808
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Affects Versions: 1.10.0
>Reporter: Yun Tang
>Assignee: Yun Tang
>Priority: Major
> Fix For: 1.12.0
>
>
> In practice, some developers or customers use some strategy to find the most 
> recent _metadata file as the checkpoint to recover from (e.g. as many proposals in 
> FLINK-9043 suggest). However, the existence of a "_metadata" file does not mean 
> the checkpoint has been completed, as the writing that creates the "_metadata" 
> file could break on a forced quit (e.g. yarn application -kill).
> We could create the checkpoint meta stream to write data to a file named 
> "_metadata.inprogress" and rename it to "_metadata" once writing has completed. 
> By doing so, we could ensure the "_metadata" file is never broken.





[jira] [Commented] (FLINK-17820) Memory threshold is ignored for channel state

2020-05-20 Thread Roman Khachatryan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112651#comment-17112651
 ] 

Roman Khachatryan commented on FLINK-17820:
---

The data written to FsCheckpointStateOutputStream goes through a DataOutputStream.

Without flushing the DataOutputStream, some data can be left in its buffer. Flushing 
the DataOutputStream also flushes the underlying FsCheckpointStateOutputStream.

Although in the current JDK DataOutputStream doesn't buffer data, I think it's 
risky to rely on that.

 

> Memory threshold is ignored for channel state
> -
>
> Key: FLINK-17820
> URL: https://issues.apache.org/jira/browse/FLINK-17820
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / Task
>Affects Versions: 1.11.0
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Config parameter state.backend.fs.memory-threshold is ignored for channel 
> state, causing each subtask to have a file per checkpoint, regardless of the 
> size of the channel state (of this subtask).
> This also causes slow cleanup and delays the next checkpoint.
>  
> The problem is that {{ChannelStateCheckpointWriter.finishWriteAndResult}} 
> calls flush(), which actually flushes the data to disk.
>  
> From the FSDataOutputStream.flush Javadoc:
> A completed flush does not mean that the data is necessarily persistent. Data 
> persistence can only be assumed after calls to close() or sync().
>  
> Possible solutions:
> 1. do not flush in {{ChannelStateCheckpointWriter.finishWriteAndResult}} (which 
> can lead to data loss in a wrapping stream)
> 2. change {{FsCheckpointStateOutputStream.flush}} behavior
> 3. wrap {{FsCheckpointStateOutputStream}} to prevent flush





[jira] [Created] (FLINK-17854) Use InputStatus directly in user-facing async input APIs (like source readers)

2020-05-20 Thread Stephan Ewen (Jira)
Stephan Ewen created FLINK-17854:


 Summary: Use InputStatus directly in user-facing async input APIs 
(like source readers)
 Key: FLINK-17854
 URL: https://issues.apache.org/jira/browse/FLINK-17854
 Project: Flink
  Issue Type: Improvement
  Components: API / Core
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 1.11.0


The flink-runtime uses the {{InputStatus}} enum in its 
{{PushingAsyncDataInput}}.
The flink-core {{SourceReader}} has a separate enum with the same purpose.

In the {{SourceOperator}} we need to bridge between these two, which is clumsy 
and a bit inefficient.

We can simply make {{InputStatus}} part of {{flink-core}} I/O packages and use 
it in the {{SourceReader}}, to avoid having to bridge it and the runtime part.





[jira] [Commented] (FLINK-17808) Rename checkpoint meta file to "_metadata" until it has completed writing

2020-05-20 Thread Stephan Ewen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112647#comment-17112647
 ] 

Stephan Ewen commented on FLINK-17808:
--

We need to avoid renaming in checkpoints, because it causes 
visibility/consistency issues on some file systems.

We can instead do the following:
  - Use the RecoverableWriter (we don't need the recoverability, but we can use 
its committing feature)
  - Write a "latest checkpoint" file in the checkpoints root which points to 
the latest completed checkpoint

Option two would also be a simple way to implement a generic "resume latest" 
feature for the CLI.
It would not reliably work on all filesystems (for example not reliably on S3), 
but that would not be as bad as having inconsistent visibility of the 
checkpoint metadata file, which is used by ZK and externalized-checkpoint-based 
recovery.
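
A minimal sketch of the first option, leaning on the committing feature of the {{RecoverableWriter}} API (path is a placeholder; error handling omitted):

{code:java}
import java.nio.charset.StandardCharsets;
import org.apache.flink.core.fs.Path;
import org.apache.flink.core.fs.RecoverableFsDataOutputStream;
import org.apache.flink.core.fs.RecoverableWriter;

public class CommittedMetadataWrite {
    public static void main(String[] args) throws Exception {
        Path metadata = new Path("file:///tmp/checkpoint/_metadata");  // placeholder
        RecoverableWriter writer = metadata.getFileSystem().createRecoverableWriter();
        RecoverableFsDataOutputStream out = writer.open(metadata);
        out.write("checkpoint meta ...".getBytes(StandardCharsets.UTF_8));
        // Nothing is visible under the final name until commit() -- we use only
        // the committing feature; the recoverability part stays unused.
        out.closeForCommit().commit();
    }
}
{code}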

> Rename checkpoint meta file to "_metadata" until it has completed writing
> -
>
> Key: FLINK-17808
> URL: https://issues.apache.org/jira/browse/FLINK-17808
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Affects Versions: 1.10.0
>Reporter: Yun Tang
>Assignee: Yun Tang
>Priority: Major
> Fix For: 1.12.0
>
>
> In practice, some developers or customers use some strategy to find the most 
> recent _metadata file as the checkpoint to recover from (e.g. as many proposals in 
> FLINK-9043 suggest). However, the existence of a "_metadata" file does not mean 
> the checkpoint has been completed, as the writing that creates the "_metadata" 
> file could break on a forced quit (e.g. yarn application -kill).
> We could create the checkpoint meta stream to write data to a file named 
> "_metadata.inprogress" and rename it to "_metadata" once writing has completed. 
> By doing so, we could ensure the "_metadata" file is never broken.





[jira] [Commented] (FLINK-17817) CollectResultFetcher fails with EOFException in AggregateReduceGroupingITCase

2020-05-20 Thread Stephan Ewen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112644#comment-17112644
 ] 

Stephan Ewen commented on FLINK-17817:
--

Is the fix going in asap? Otherwise we need to do a separate fix now (or 
deactivate the feature).
We cannot leave unstable tests for long, because they tend to mask other 
failures.

> CollectResultFetcher fails with EOFException in AggregateReduceGroupingITCase
> -
>
> Key: FLINK-17817
> URL: https://issues.apache.org/jira/browse/FLINK-17817
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Caizhi Weng
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>
> CI: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1826&view=logs&j=e25d5e7e-2a9c-5589-4940-0b638d75a414&t=f83cd372-208c-5ec4-12a8-337462457129
> {code}
> 2020-05-19T10:34:18.3224679Z [ERROR] 
> testSingleAggOnTable_SortAgg(org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase)
>   Time elapsed: 7.537 s  <<< ERROR!
> 2020-05-19T10:34:18.3225273Z java.lang.RuntimeException: Failed to fetch next 
> result
> 2020-05-19T10:34:18.3227634Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:92)
> 2020-05-19T10:34:18.3228518Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:63)
> 2020-05-19T10:34:18.3229170Z  at 
> org.apache.flink.shaded.guava18.com.google.common.collect.Iterators.addAll(Iterators.java:361)
> 2020-05-19T10:34:18.3229863Z  at 
> org.apache.flink.shaded.guava18.com.google.common.collect.Lists.newArrayList(Lists.java:160)
> 2020-05-19T10:34:18.3230586Z  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300)
> 2020-05-19T10:34:18.3231303Z  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:141)
> 2020-05-19T10:34:18.3231996Z  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:107)
> 2020-05-19T10:34:18.3232847Z  at 
> org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable(AggregateReduceGroupingITCase.scala:176)
> 2020-05-19T10:34:18.3233694Z  at 
> org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable_SortAgg(AggregateReduceGroupingITCase.scala:122)
> 2020-05-19T10:34:18.3234461Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-05-19T10:34:18.3234983Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-05-19T10:34:18.3235632Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-05-19T10:34:18.3236615Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-05-19T10:34:18.3237256Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-05-19T10:34:18.3237965Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-05-19T10:34:18.3238750Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-05-19T10:34:18.3239314Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-05-19T10:34:18.3239838Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-05-19T10:34:18.3240362Z  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2020-05-19T10:34:18.3240803Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-05-19T10:34:18.3243624Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-05-19T10:34:18.3244531Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-05-19T10:34:18.3245325Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-05-19T10:34:18.3246086Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-05-19T10:34:18.3246765Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-05-19T10:34:18.3247390Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-05-19T10:34:18.3248012Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-05-19T10:34:18.3248779Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-05-19T10:34:18.3249417Z  at 
> org.junit.runners.ParentRunner$2.

[jira] [Commented] (FLINK-17820) Memory threshold is ignored for channel state

2020-05-20 Thread Stephan Ewen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112643#comment-17112643
 ] 

Stephan Ewen commented on FLINK-17820:
--

I would simply avoid flushing here. It makes no difference, persistence-wise: 
persistence is only guaranteed once you call {{closeAndGetHandle()}}.
There are few reasons ever to call flush on a stream (usually only when it is 
pipelined to a receiver and you want to make sure that the receiver receives 
buffered data, which I assume is not the case here, as this is not a network 
stream).

(The reference to {{sync}} is not quite right in the JavaDocs. For some 
FileSystems, like S3, {{sync()}} does not help at all).

I would not change {{FsCheckpointStateOutputStream}}, this is a class that does 
exactly what it should do at the moment.

Wrapping also seems like unnecessary complexity, which we should avoid whenever 
possible.
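
In code terms, the contract being described looks like this (a sketch against the Flink 1.11 class layout; the stream factory comes from the checkpoint storage):

{code:java}
import org.apache.flink.runtime.state.CheckpointStreamFactory;
import org.apache.flink.runtime.state.CheckpointedStateScope;
import org.apache.flink.runtime.state.StreamStateHandle;

final class ChannelStateWriteSketch {
    // Persistence is only guaranteed by closeAndGetHandle(); flush() adds no
    // guarantee and, for FsCheckpointStateOutputStream, spills small state to a
    // file even when it is below state.backend.fs.memory-threshold.
    static StreamStateHandle write(CheckpointStreamFactory factory, byte[] state)
            throws Exception {
        CheckpointStreamFactory.CheckpointStateOutputStream out =
            factory.createCheckpointStateOutputStream(CheckpointedStateScope.EXCLUSIVE);
        out.write(state);
        // intentionally no out.flush() here
        return out.closeAndGetHandle();  // in-memory handle if under the threshold
    }
}
{code}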

> Memory threshold is ignored for channel state
> -
>
> Key: FLINK-17820
> URL: https://issues.apache.org/jira/browse/FLINK-17820
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / Task
>Affects Versions: 1.11.0
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Config parameter state.backend.fs.memory-threshold is ignored for channel 
> state, causing each subtask to have a file per checkpoint, regardless of the 
> size of the channel state (of this subtask).
> This also causes slow cleanup and delays the next checkpoint.
>  
> The problem is that {{ChannelStateCheckpointWriter.finishWriteAndResult}} 
> calls flush(), which actually flushes the data to disk.
>  
> From the FSDataOutputStream.flush Javadoc:
> A completed flush does not mean that the data is necessarily persistent. Data 
> persistence can only be assumed after calls to close() or sync().
>  
> Possible solutions:
> 1. do not flush in {{ChannelStateCheckpointWriter.finishWriteAndResult}} (which 
> can lead to data loss in a wrapping stream)
> 2. change {{FsCheckpointStateOutputStream.flush}} behavior
> 3. wrap {{FsCheckpointStateOutputStream}} to prevent flush



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] StephanEwen commented on pull request #12258: [FLINK-17820][task][checkpointing] Don't flush channel state to disk explicitly

2020-05-20 Thread GitBox


StephanEwen commented on pull request #12258:
URL: https://github.com/apache/flink/pull/12258#issuecomment-631766209


   I left some comments on the jira issue.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17852) FlinkSQL support WITH clause in query statement

2020-05-20 Thread sam lin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112588#comment-17112588
 ] 

sam lin commented on FLINK-17852:
-

[~f.pompermaier]: Nope, this is for the SELECT statement; the issue you pointed 
to is about the CREATE statement. The name after WITH acts like a variable that 
stores the result of the sub-query.
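
For illustration, a hedged sketch (table and column names made up) of the 
semantically equivalent rewrite that inlines the CTE as a derived table, which 
works without WITH support:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.newInstance().build());
// desired form, per this issue:
//   WITH myName AS (SELECT * FROM t WHERE x > 0) SELECT column1 FROM myName
// equivalent rewrite without WITH:
Table result = tEnv.sqlQuery(
        "SELECT column1 FROM (SELECT * FROM t WHERE x > 0) AS myName");
{code}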

> FlinkSQL support WITH clause in query statement
> ---
>
> Key: FLINK-17852
> URL: https://issues.apache.org/jira/browse/FLINK-17852
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: sam lin
>Priority: Major
>
> Many modern SQL languages support the WITH clause in query statements, e.g.
> ```
> WITH myName AS (
>   select * from table where ...
> )
> select column1 from myName
> ```
> e.g. BeamSQL supports this: 
> [https://beam.apache.org/documentation/dsls/sql/calcite/query-syntax/] 
> query_statement: [ WITH with_query_name AS ( query_expr ) [, ...] ] 
> query_expr.
> Presto supports this as well: 
> [https://prestodb.io/docs/current/sql/select.html#with-clause]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17852) FlinkSQL support WITH clause in query statement

2020-05-20 Thread sam lin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sam lin updated FLINK-17852:

Description: 
Many modern SQL languages support the WITH clause in query statements, e.g.

```

WITH myName AS (

  select * from table where ...

)

select column1 from myName

```

e.g. BeamSQL supports this: 
[https://beam.apache.org/documentation/dsls/sql/calcite/query-syntax/] 

query_statement: [ WITH with_query_name AS ( query_expr ) [, ...] ] query_expr.

Presto supports this as well: 
[https://prestodb.io/docs/current/sql/select.html#with-clause]

  was:
Many modern SQL languages support the WITH clause in query statements, e.g.

```

WITH myName AS (

  select * from table where ...

)

select column1 from myName

```

e.g. BeamSQL supports this:

query_statement: [ WITH with_query_name AS ( query_expr ) [, ...] ] query_expr

[https://beam.apache.org/documentation/dsls/sql/calcite/query-syntax/]


> FlinkSQL support WITH clause in query statement
> ---
>
> Key: FLINK-17852
> URL: https://issues.apache.org/jira/browse/FLINK-17852
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: sam lin
>Priority: Major
>
> Many modern SQL languages support the WITH clause in query statements, e.g.
> ```
> WITH myName AS (
>   select * from table where ...
> )
> select column1 from myName
> ```
> e.g. BeamSQL supports this: 
> [https://beam.apache.org/documentation/dsls/sql/calcite/query-syntax/] 
> query_statement: [ WITH with_query_name AS ( query_expr ) [, ...] ] 
> query_expr.
> Presto supports this as well: 
> [https://prestodb.io/docs/current/sql/select.html#with-clause]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-statefun] igalshilman commented on a change in pull request #115: [FLINK-17518] [e2e] Add remote module E2E

2020-05-20 Thread GitBox


igalshilman commented on a change in pull request #115:
URL: https://github.com/apache/flink-statefun/pull/115#discussion_r428272246



##
File path: statefun-e2e-tests/pom.xml
##
@@ -63,11 +63,26 @@ under the License.
 1.6.0
 
 
+build-statefun-base-image
+
+exec
+
+false
 pre-integration-test
+
+
${user.dir}/tools/docker/build-stateful-functions.sh

Review comment:
   did you mean `user.dir` or `project.root` here?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-17853) JobGraph is not getting deleted after Job cancelation

2020-05-20 Thread Fritz Budiyanto (Jira)
Fritz Budiyanto created FLINK-17853:
---

 Summary: JobGraph is not getting deleted after Job cancelation
 Key: FLINK-17853
 URL: https://issues.apache.org/jira/browse/FLINK-17853
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.9.2
 Environment: Flink 1.9.2
Zookeeper from AWS MSK
Reporter: Fritz Budiyanto
 Attachments: flinkissue.txt

I have been seeing this issue several times where JobGraphs are not cleaned up 
properly after job deletion. Job deletion is performed using the "flink stop" 
command. As a result, the JobGraph node lingers in ZK; when the Flink cluster is 
restarted, it attempts HA restoration from a non-existing checkpoint, which 
prevents the Flink cluster from coming up.




2020-05-19 19:56:21,471 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor 
- Un-registering task and sending final execution state FINISHED to JobManager 
for task Source: kafkaConsumer[update_server] -> 
(DetectedUpdateMessageConverter -> Sink: update_server.detected_updates, 
DrivenCoordinatesMessageConverter -> Sink: update_server.driven_coordinates) 
588902a8096f49845b09fa1f595d6065.
2020-05-19 19:56:21,622 INFO 
org.apache.flink.runtime.taskexecutor.slot.TaskSlotTable - Free slot 
TaskSlot(index:0, state:ACTIVE, resource profile: 
ResourceProfile\{cpuCores=1.7976931348623157E308, heapMemoryInMB=2147483647, 
directMemoryInMB=2147483647, nativeMemoryInMB=2147483647, 
networkMemoryInMB=2147483647, managedMemoryInMB=642}, allocationId: 
29f6a5f83c832486f2d7ebe5c779fa32, jobId: 86a028b3f7aada8ffe59859ca71d6385).
2020-05-19 19:56:21,622 INFO 
org.apache.flink.runtime.taskexecutor.JobLeaderService - Remove job 
86a028b3f7aada8ffe59859ca71d6385 from job leader monitoring.
2020-05-19 19:56:21,622 INFO 
org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService - 
Stopping ZooKeeperLeaderRetrievalService 
/leader/86a028b3f7aada8ffe59859ca71d6385/job_manager_lock.
2020-05-19 19:56:21,623 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor 
- Close JobManager connection for job 86a028b3f7aada8ffe59859ca71d6385.
2020-05-19 19:56:21,624 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor 
- Close JobManager connection for job 86a028b3f7aada8ffe59859ca71d6385.
2020-05-19 19:56:21,624 INFO 
org.apache.flink.runtime.taskexecutor.JobLeaderService - Cannot reconnect to 
job 86a028b3f7aada8ffe59859ca71d6385 because it is not registered.


...
Zookeeper CLI:

ls /flink/cluster_update/jobgraphs
[86a028b3f7aada8ffe59859ca71d6385]

 

Attached are the Flink logs, in reverse order.
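
As a hedged illustration (Curator client; the ZK address is made up, the path 
and job id are from this report), inspecting and, strictly as a last-resort 
manual workaround, removing the lingering job-graph node could look like this:

{code:java}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class JobGraphZkCleanup {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zookeeper:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();
        // list the lingering job graph node shown above
        System.out.println(client.getChildren().forPath("/flink/cluster_update/jobgraphs"));
        // only if the job is really gone: remove the stale node so that HA
        // recovery does not try to restore it on the next cluster start
        client.delete().deletingChildrenIfNeeded()
                .forPath("/flink/cluster_update/jobgraphs/86a028b3f7aada8ffe59859ca71d6385");
        client.close();
    }
}
{code}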



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16998) Add a changeflag to Row type

2020-05-20 Thread Timo Walther (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-16998:
-
Fix Version/s: 1.11.0

> Add a changeflag to Row type
> 
>
> Key: FLINK-16998
> URL: https://issues.apache.org/jira/browse/FLINK-16998
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Core
>Reporter: Timo Walther
>Assignee: Timo Walther
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> In the Blink planner, the change flag of records travelling through the pipeline 
> is part of the record itself but not part of the logical schema. This 
> simplifies the architecture and API in many cases.
> This is why we aim to adopt the same mechanism for 
> {{org.apache.flink.types.Row}}.
> Take {{tableEnv.toRetractStream()}} as an example that returns either a Scala 
> or a Java {{Tuple2}}. For FLIP-95 we need to support more update 
> kinds than just a binary boolean.
> This means:
> - Add a changeflag {{RowKind}} to {{Row}}
> - Update the {{Row.toString()}} method
> - Update serializers in a backwards-compatible way
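
A hedged sketch of the resulting API (method names as merged for 1.11; details 
may differ):

{code:java}
import org.apache.flink.types.Row;
import org.apache.flink.types.RowKind;

// The change flag travels with the record, not with the logical schema.
Row row = Row.ofKind(RowKind.UPDATE_AFTER, "Alice", 42);
row.getKind();           // UPDATE_AFTER, instead of a binary retract boolean
System.out.println(row); // toString() now includes the kind, e.g. "+U[Alice, 42]"
{code}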



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16998) Add a changeflag to Row type

2020-05-20 Thread Timo Walther (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112566#comment-17112566
 ] 

Timo Walther commented on FLINK-16998:
--

Backwards compatibility added.

Fixed in 1.11.0: 75a2b5fd2b8f3df36de219a055822c846143a6c3
Fixed in 1.12.0: 214b83776414857a1b2b7e2ffce7b9b697e812bd

> Add a changeflag to Row type
> 
>
> Key: FLINK-16998
> URL: https://issues.apache.org/jira/browse/FLINK-16998
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Core
>Reporter: Timo Walther
>Assignee: Timo Walther
>Priority: Major
>  Labels: pull-request-available
>
> In the Blink planner, the change flag of records travelling through the pipeline 
> is part of the record itself but not part of the logical schema. This 
> simplifies the architecture and API in many cases.
> This is why we aim to adopt the same mechanism for 
> {{org.apache.flink.types.Row}}.
> Take {{tableEnv.toRetractStream()}} as an example that returns either a Scala 
> or a Java {{Tuple2}}. For FLIP-95 we need to support more update 
> kinds than just a binary boolean.
> This means:
> - Add a changeflag {{RowKind}} to {{Row}}
> - Update the {{Row.toString()}} method
> - Update serializers in a backwards-compatible way



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] twalthr commented on pull request #12263: [FLINK-16998][core] Support backwards compatibility for upgraded RowSerializer

2020-05-20 Thread GitBox


twalthr commented on pull request #12263:
URL: https://github.com/apache/flink/pull/12263#issuecomment-631688716


   Merged.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] twalthr closed pull request #12263: [FLINK-16998][core] Support backwards compatibility for upgraded RowSerializer

2020-05-20 Thread GitBox


twalthr closed pull request #12263:
URL: https://github.com/apache/flink/pull/12263


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17852) FlinkSQL support WITH clause in query statement

2020-05-20 Thread Flavio Pompermaier (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112558#comment-17112558
 ] 

Flavio Pompermaier commented on FLINK-17852:


Can this be related to FLINK-17361 ?

> FlinkSQL support WITH clause in query statement
> ---
>
> Key: FLINK-17852
> URL: https://issues.apache.org/jira/browse/FLINK-17852
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: sam lin
>Priority: Major
>
> Many modern SQL languages support the WITH clause in query statements, e.g.
> ```
> WITH myName AS (
>   select * from table where ...
> )
> select column1 from myName
> ```
> e.g. BeamSQL supports this:
> query_statement: [ WITH with_query_name AS ( query_expr ) [, ...] ] query_expr
> [https://beam.apache.org/documentation/dsls/sql/calcite/query-syntax/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17852) FlinkSQL support WITH clause in query statement

2020-05-20 Thread sam lin (Jira)
sam lin created FLINK-17852:
---

 Summary: FlinkSQL support WITH clause in query statement
 Key: FLINK-17852
 URL: https://issues.apache.org/jira/browse/FLINK-17852
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / API
Reporter: sam lin


Many modern SQL languages support the WITH clause in query statements, e.g.

```

WITH myName AS (

  select * from table where ...

)

select column1 from myName

```

e.g. BeamSQL supports this:

query_statement: [ WITH with_query_name AS ( query_expr ) [, ...] ] query_expr

[https://beam.apache.org/documentation/dsls/sql/calcite/query-syntax/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-statefun] sjwiesman closed pull request #106: [FLINK-16985] Stateful Functions Examples have their specific job names

2020-05-20 Thread GitBox


sjwiesman closed pull request #106:
URL: https://github.com/apache/flink-statefun/pull/106


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (FLINK-16985) Specify job names for Stateful Functions examples

2020-05-20 Thread Seth Wiesman (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seth Wiesman resolved FLINK-16985.
--
Fix Version/s: (was: statefun-2.0.1)
   statefun-2.1.0
   Resolution: Fixed

> Specify job names for Stateful Functions examples
> -
>
> Key: FLINK-16985
> URL: https://issues.apache.org/jira/browse/FLINK-16985
> Project: Flink
>  Issue Type: Task
>  Components: Stateful Functions
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: UnityLung
>Priority: Minor
>  Labels: pull-request-available
> Fix For: statefun-2.1.0
>
>
> The StateFun examples all use the default job name "StatefulFunctions".
> It would be nice if they had specific job names, like "Greeter Example" or 
> "Shopping Cart Example", etc.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-statefun] sjwiesman commented on pull request #106: [FLINK-16985] Stateful Functions Examples have their specific job names

2020-05-20 Thread GitBox


sjwiesman commented on pull request #106:
URL: https://github.com/apache/flink-statefun/pull/106#issuecomment-631677707


   Merging . . . 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17850) PostgresCatalogITCase . testGroupByInsert() fails on CI

2020-05-20 Thread Flavio Pompermaier (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112551#comment-17112551
 ] 

Flavio Pompermaier commented on FLINK-17850:


I'll try to look into it. This test was introduced by [~jark] during the PR 
review, so I need to find out what's wrong with it.
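
For reference, the failure is a plain query/sink schema mismatch. A minimal 
sketch of this class of failure and fix (made-up table names; the 
{{datagen}}/{{blackhole}} connectors are assumed to be available), aligning the 
query's result types with the sink's declared types via an explicit CAST:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.newInstance().build());
tEnv.executeSql("CREATE TABLE src (s STRING) WITH ('connector' = 'datagen')");
tEnv.executeSql("CREATE TABLE snk (c CHAR(3)) WITH ('connector' = 'blackhole')");
// "INSERT INTO snk SELECT s FROM src" would fail validation as above, because
// the query type does not match the declared sink type; an explicit cast
// makes the schemas line up:
tEnv.executeSql("INSERT INTO snk SELECT CAST(s AS CHAR(3)) FROM src");
{code}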

> PostgresCatalogITCase . testGroupByInsert() fails on CI
> ---
>
> Key: FLINK-17850
> URL: https://issues.apache.org/jira/browse/FLINK-17850
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem
>Affects Versions: 1.11.0
>Reporter: Stephan Ewen
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.11.0, 1.12.0
>
>
> {{org.apache.flink.connector.jdbc.catalog.PostgresCatalogITCase . 
> testGroupByInsert}}
> Error:
> {code}
> 2020-05-20T16:36:33.9647037Z org.apache.flink.table.api.ValidationException: 
> 2020-05-20T16:36:33.9647354Z Field types of query result and registered 
> TableSink mypg.postgres.primitive_table2 do not match.
> 2020-05-20T16:36:33.9648233Z Query schema: [int: INT NOT NULL, EXPR$1: 
> VARBINARY(2147483647) NOT NULL, short: SMALLINT NOT NULL, EXPR$3: BIGINT, 
> EXPR$4: FLOAT, EXPR$5: DOUBLE, EXPR$6: DECIMAL(10, 5), EXPR$7: BOOLEAN, 
> EXPR$8: VARCHAR(2147483647), EXPR$9: CHAR(1) NOT NULL, EXPR$10: CHAR(1) NOT 
> NULL, EXPR$11: VARCHAR(20), EXPR$12: TIMESTAMP(5), EXPR$13: DATE, EXPR$14: 
> TIME(0), EXPR$15: DECIMAL(38, 18)]
> 2020-05-20T16:36:33.9650272Z Sink schema: [int: INT, bytea: 
> VARBINARY(2147483647), short: SMALLINT, long: BIGINT, real: FLOAT, 
> double_precision: DOUBLE, numeric: DECIMAL(10, 5), decimal: DECIMAL(10, 1), 
> boolean: BOOLEAN, text: VARCHAR(2147483647), char: CHAR(1), character: 
> CHAR(3), character_varying: VARCHAR(20), timestamp: TIMESTAMP(5), date: DATE, 
> time: TIME(0), default_numeric: DECIMAL(38, 18)]
> 2020-05-20T16:36:33.9651218Z  at 
> org.apache.flink.table.planner.sinks.TableSinkUtils$.validateSchemaAndApplyImplicitCast(TableSinkUtils.scala:100)
> 2020-05-20T16:36:33.9651689Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:218)
> 2020-05-20T16:36:33.9652136Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:193)
> 2020-05-20T16:36:33.9652936Z  at scala.Option.map(Option.scala:146)
> 2020-05-20T16:36:33.9653593Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:193)
> 2020-05-20T16:36:33.9653993Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:152)
> 2020-05-20T16:36:33.9654428Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:152)
> 2020-05-20T16:36:33.9654841Z  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> 2020-05-20T16:36:33.9655221Z  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> 2020-05-20T16:36:33.9655759Z  at 
> scala.collection.Iterator$class.foreach(Iterator.scala:891)
> 2020-05-20T16:36:33.9656072Z  at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> 2020-05-20T16:36:33.9656413Z  at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> 2020-05-20T16:36:33.9656890Z  at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> 2020-05-20T16:36:33.9657211Z  at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> 2020-05-20T16:36:33.9657525Z  at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104)
> 2020-05-20T16:36:33.9657878Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:152)
> 2020-05-20T16:36:33.9658350Z  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1217)
> 2020-05-20T16:36:33.9658784Z  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:663)
> 2020-05-20T16:36:33.9659391Z  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeOperation(TableEnvironmentImpl.java:750)
> 2020-05-20T16:36:33.9659856Z  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:653)
> 2020-05-20T16:36:33.9660507Z  at 
> org.apache.flink.table.planner.runtime.utils.TableEnvUtil$.execInsertSqlAndWaitResult(TableEnvUtil.scala:27)
> 2020-05-20T16:36:33.9661115Z  at 
> org.apache.flink.table.planner.runtime.utils.TableEnvUtil.execInsertSqlAndWaitResult(TableEnvUtil.scala)
> 2020-05-20T16:36:33.9661583Z  at 
> org.apache.flink.connector.jdbc.catalog.PostgresCatalogITCase.testGroupByInsert(PostgresCatalogITCase.java:88)
> {code}
> Full log: 
> https://dev.azure.com/sewen0794/19b23adf-d190-4fb4-ae6e-2e92b0

[GitHub] [flink] flinkbot edited a comment on pull request #8885: [FLINK-12855] [streaming-java][window-assigners] Add functionality that staggers panes on partitions to distribute workload.

2020-05-20 Thread GitBox


flinkbot edited a comment on pull request #8885:
URL: https://github.com/apache/flink/pull/8885#issuecomment-513164005


   
   ## CI report:
   
   * 19a0f944f4c8b2177afe5b41df587b89daa0d008 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/119766396) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-statefun] igalshilman commented on pull request #106: [FLINK-16985] Stateful Functions Examples have their specific job names

2020-05-20 Thread GitBox


igalshilman commented on pull request #106:
URL: https://github.com/apache/flink-statefun/pull/106#issuecomment-631675361


   Hi @abc863377,
   No worries, thanks a lot for your first contribution to this project!
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] corydolphin commented on pull request #8885: [FLINK-12855] [streaming-java][window-assigners] Add functionality that staggers panes on partitions to distribute workload.

2020-05-20 Thread GitBox


corydolphin commented on pull request #8885:
URL: https://github.com/apache/flink/pull/8885#issuecomment-631671226


   @TengHu have you been successfully using this change in production? We've 
run into a similar challenge with window alignment and would love to see this 
change merged, and to hear about successes or challenges with it in your usage. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-statefun] igalshilman commented on pull request #114: [FLINK-17611] Add unix domain sockets to remote functions

2020-05-20 Thread GitBox


igalshilman commented on pull request #114:
URL: https://github.com/apache/flink-statefun/pull/114#issuecomment-631670579


   Hi @slinkydeveloper,
   I've omitted that commit since it included additional changes besides the 
test (Docker files, print statements, etc.).
   I took the test code and refactored it a little, and now it is included.
   
   All in all, I think the PR is in really good shape; I've validated it 
locally.
   
   @tzulitai when merging, feel free to squash everything into a single commit or 
re-arrange the commits a little, since some of the changes are already bundled 
anyway.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-statefun] abc863377 commented on pull request #106: [FLINK-16985] Stateful Functions Examples have their specific job names

2020-05-20 Thread GitBox


abc863377 commented on pull request #106:
URL: https://github.com/apache/flink-statefun/pull/106#issuecomment-631670413


   Hi @igalshilman,
   Sorry, I am a Git beginner.
   
   I have now removed the old commit and added the new statefun.flink-job-name 
configuration to the distribution template.
   
   Can you please review it?
   
   Thanks a lot.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-15330) HadoopS3RecoverableWriterITCase.testCallingDeleteObjectTwiceDoesNotThroughException() fails with AssertionError

2020-05-20 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-15330:
---
Priority: Critical  (was: Major)

> HadoopS3RecoverableWriterITCase.testCallingDeleteObjectTwiceDoesNotThroughException()
>  fails with AssertionError 
> 
>
> Key: FLINK-15330
> URL: https://issues.apache.org/jira/browse/FLINK-15330
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> {code:java}
> 13:35:08.288 [ERROR] 
> testCallingDeleteObjectTwiceDoesNotThroughException(org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase)
>   Time elapsed: 1.604 s  <<< FAILURE!
> java.lang.AssertionError
>   at 
> org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testCallingDeleteObjectTwiceDoesNotThroughException(HadoopS3RecoverableWriterITCase.java:259)
>  {code}
> [https://travis-ci.org/apache/flink/jobs/626653169]
> [https://s3.amazonaws.com/flink-logs-us/travis-artifacts/apache/flink/41653/41653.6.tar.gz]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15330) HadoopS3RecoverableWriterITCase.testCallingDeleteObjectTwiceDoesNotThroughException() fails with AssertionError

2020-05-20 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-15330:
---
Fix Version/s: 1.11.0

> HadoopS3RecoverableWriterITCase.testCallingDeleteObjectTwiceDoesNotThroughException()
>  fails with AssertionError 
> 
>
> Key: FLINK-15330
> URL: https://issues.apache.org/jira/browse/FLINK-15330
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> {code:java}
> 13:35:08.288 [ERROR] 
> testCallingDeleteObjectTwiceDoesNotThroughException(org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase)
>   Time elapsed: 1.604 s  <<< FAILURE!
> java.lang.AssertionError
>   at 
> org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testCallingDeleteObjectTwiceDoesNotThroughException(HadoopS3RecoverableWriterITCase.java:259)
>  {code}
> [https://travis-ci.org/apache/flink/jobs/626653169]
> [https://s3.amazonaws.com/flink-logs-us/travis-artifacts/apache/flink/41653/41653.6.tar.gz]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15330) HadoopS3RecoverableWriterITCase.testCallingDeleteObjectTwiceDoesNotThroughException() fails with AssertionError

2020-05-20 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112547#comment-17112547
 ] 

Robert Metzger commented on FLINK-15330:


{code}
2020-05-20T16:21:33.6126688Z [ERROR] 
testCallingDeleteObjectTwiceDoesNotThroughException(org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase)
  Time elapsed: 5.664 s  <<< FAILURE!
2020-05-20T16:21:33.6127511Z java.lang.AssertionError
2020-05-20T16:21:33.6127760Zat org.junit.Assert.fail(Assert.java:86)
2020-05-20T16:21:33.6128075Zat org.junit.Assert.assertTrue(Assert.java:41)
2020-05-20T16:21:33.6133847Zat org.junit.Assert.assertFalse(Assert.java:64)
2020-05-20T16:21:33.6134369Zat org.junit.Assert.assertFalse(Assert.java:74)
2020-05-20T16:21:33.6135220Zat 
org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testCallingDeleteObjectTwiceDoesNotThroughException(HadoopS3RecoverableWriterITCase.java:259)
2020-05-20T16:21:33.6135988Zat 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-05-20T16:21:33.6136654Zat 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-05-20T16:21:33.6137558Zat 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-05-20T16:21:33.6138209Zat 
java.lang.reflect.Method.invoke(Method.java:498)
2020-05-20T16:21:33.6138827Zat 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2020-05-20T16:21:33.6139556Zat 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2020-05-20T16:21:33.6140288Zat 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2020-05-20T16:21:33.6141028Zat 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2020-05-20T16:21:33.6141618Zat 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
2020-05-20T16:21:33.6142062Zat 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
2020-05-20T16:21:33.6142465Zat 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
2020-05-20T16:21:33.6142830Zat 
org.junit.rules.RunRules.evaluate(RunRules.java:20)
2020-05-20T16:21:33.6143200Zat 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
2020-05-20T16:21:33.6143605Zat 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
2020-05-20T16:21:33.6144063Zat 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
2020-05-20T16:21:33.6144498Zat 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
2020-05-20T16:21:33.6144885Zat 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
2020-05-20T16:21:33.6145284Zat 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
2020-05-20T16:21:33.6145662Zat 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
2020-05-20T16:21:33.6146054Zat 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
2020-05-20T16:21:33.6146455Zat 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
2020-05-20T16:21:33.6147414Zat 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
2020-05-20T16:21:33.6147963Zat 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
2020-05-20T16:21:33.6148359Zat 
org.junit.rules.RunRules.evaluate(RunRules.java:20)
2020-05-20T16:21:33.6148720Zat 
org.junit.runners.ParentRunner.run(ParentRunner.java:363)
2020-05-20T16:21:33.6149119Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
2020-05-20T16:21:33.6149600Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
2020-05-20T16:21:33.6150073Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
2020-05-20T16:21:33.6150594Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
2020-05-20T16:21:33.6151094Zat 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
2020-05-20T16:21:33.6151595Zat 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
2020-05-20T16:21:33.6152062Zat 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
2020-05-20T16:21:33.6152486Zat 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
2020-05-20T16:21:33.6152740Z 
2020-05-20T16:21:34.0293226Z [INFO] 
2020-05-20T16:21:34.0293810Z [INFO] Results:
2020-05-20T16:21:34.0294929Z [INFO] 
2020-05-20T16:21:34.0295355Z [ERROR] Failures: 
2020-05-20T16:21:34.0296019Z [ERROR]   
HadoopS3RecoverableWriterITCase.testCallingDeleteObjectTwiceDoesNotThroughException:259
2020-05-20T16:21:34.0296583Z [INFO] 
{code}

[GitHub] [flink] flinkbot edited a comment on pull request #12269: [FLINK-17351] [runtime] Increase `continuousFailureCounter` in `CheckpointFailureManager` for CHECKPOINT_EXPIRED

2020-05-20 Thread GitBox


flinkbot edited a comment on pull request #12269:
URL: https://github.com/apache/flink/pull/12269#issuecomment-631541996


   
   ## CI report:
   
   * 24c44fd00652a6b5859075b3afea1e4e9ca98445 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1960)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17850) PostgresCatalogITCase . testGroupByInsert() fails on CI

2020-05-20 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112546#comment-17112546
 ] 

Robert Metzger commented on FLINK-17850:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1962&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8

> PostgresCatalogITCase . testGroupByInsert() fails on CI
> ---
>
> Key: FLINK-17850
> URL: https://issues.apache.org/jira/browse/FLINK-17850
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem
>Affects Versions: 1.11.0
>Reporter: Stephan Ewen
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.11.0, 1.12.0
>
>
> {{org.apache.flink.connector.jdbc.catalog.PostgresCatalogITCase . 
> testGroupByInsert}}
> Error:
> {code}
> 2020-05-20T16:36:33.9647037Z org.apache.flink.table.api.ValidationException: 
> 2020-05-20T16:36:33.9647354Z Field types of query result and registered 
> TableSink mypg.postgres.primitive_table2 do not match.
> 2020-05-20T16:36:33.9648233Z Query schema: [int: INT NOT NULL, EXPR$1: 
> VARBINARY(2147483647) NOT NULL, short: SMALLINT NOT NULL, EXPR$3: BIGINT, 
> EXPR$4: FLOAT, EXPR$5: DOUBLE, EXPR$6: DECIMAL(10, 5), EXPR$7: BOOLEAN, 
> EXPR$8: VARCHAR(2147483647), EXPR$9: CHAR(1) NOT NULL, EXPR$10: CHAR(1) NOT 
> NULL, EXPR$11: VARCHAR(20), EXPR$12: TIMESTAMP(5), EXPR$13: DATE, EXPR$14: 
> TIME(0), EXPR$15: DECIMAL(38, 18)]
> 2020-05-20T16:36:33.9650272Z Sink schema: [int: INT, bytea: 
> VARBINARY(2147483647), short: SMALLINT, long: BIGINT, real: FLOAT, 
> double_precision: DOUBLE, numeric: DECIMAL(10, 5), decimal: DECIMAL(10, 1), 
> boolean: BOOLEAN, text: VARCHAR(2147483647), char: CHAR(1), character: 
> CHAR(3), character_varying: VARCHAR(20), timestamp: TIMESTAMP(5), date: DATE, 
> time: TIME(0), default_numeric: DECIMAL(38, 18)]
> 2020-05-20T16:36:33.9651218Z  at 
> org.apache.flink.table.planner.sinks.TableSinkUtils$.validateSchemaAndApplyImplicitCast(TableSinkUtils.scala:100)
> 2020-05-20T16:36:33.9651689Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:218)
> 2020-05-20T16:36:33.9652136Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:193)
> 2020-05-20T16:36:33.9652936Z  at scala.Option.map(Option.scala:146)
> 2020-05-20T16:36:33.9653593Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:193)
> 2020-05-20T16:36:33.9653993Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:152)
> 2020-05-20T16:36:33.9654428Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:152)
> 2020-05-20T16:36:33.9654841Z  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> 2020-05-20T16:36:33.9655221Z  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> 2020-05-20T16:36:33.9655759Z  at 
> scala.collection.Iterator$class.foreach(Iterator.scala:891)
> 2020-05-20T16:36:33.9656072Z  at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> 2020-05-20T16:36:33.9656413Z  at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> 2020-05-20T16:36:33.9656890Z  at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> 2020-05-20T16:36:33.9657211Z  at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> 2020-05-20T16:36:33.9657525Z  at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104)
> 2020-05-20T16:36:33.9657878Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:152)
> 2020-05-20T16:36:33.9658350Z  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1217)
> 2020-05-20T16:36:33.9658784Z  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:663)
> 2020-05-20T16:36:33.9659391Z  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeOperation(TableEnvironmentImpl.java:750)
> 2020-05-20T16:36:33.9659856Z  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:653)
> 2020-05-20T16:36:33.9660507Z  at 
> org.apache.flink.table.planner.runtime.utils.TableEnvUtil$.execInsertSqlAndWaitResult(TableEnvUtil.scala:27)
> 2020-05-20T16:36:33.9661115Z  at 
> org.apache.flink.table.planner.runtime.utils.TableEnvUtil.execInsertSqlAndWaitResult(TableEnvUtil.scala)
> 2020-05-20T16:36:33.9661583Z  at 
> org.apache.flink.connector.jdbc.catalog.PostgresCatalogITCase.testGroupByInsert(PostgresCatalogITCase.java:88)
> {code}
> Full log: 
> https://dev.azure.com/sewen0794/1

[jira] [Created] (FLINK-17851) FlinkKafkaProducerITCase.testResumeTransaction:108->assertRecord NoSuchElement

2020-05-20 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-17851:
--

 Summary: 
FlinkKafkaProducerITCase.testResumeTransaction:108->assertRecord NoSuchElement
 Key: FLINK-17851
 URL: https://issues.apache.org/jira/browse/FLINK-17851
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka, Tests
Affects Versions: 1.11.0
Reporter: Robert Metzger


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1957&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8
{code}
2020-05-20T16:36:17.1033719Z [ERROR] Tests run: 10, Failures: 0, Errors: 1, 
Skipped: 0, Time elapsed: 68.936 s <<< FAILURE! - in 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase
2020-05-20T16:36:17.1035011Z [ERROR] 
testResumeTransaction(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase)
  Time elapsed: 11.918 s  <<< ERROR!
2020-05-20T16:36:17.1036296Z java.util.NoSuchElementException
2020-05-20T16:36:17.1036802Zat 
org.apache.kafka.common.utils.AbstractIterator.next(AbstractIterator.java:52)
2020-05-20T16:36:17.1037553Zat 
org.apache.flink.shaded.guava18.com.google.common.collect.Iterators.getOnlyElement(Iterators.java:302)
2020-05-20T16:36:17.1038087Zat 
org.apache.flink.shaded.guava18.com.google.common.collect.Iterables.getOnlyElement(Iterables.java:289)
2020-05-20T16:36:17.1038654Zat 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase.assertRecord(FlinkKafkaProducerITCase.java:201)
2020-05-20T16:36:17.1039239Zat 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase.testResumeTransaction(FlinkKafkaProducerITCase.java:108)
2020-05-20T16:36:17.1039727Zat 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-05-20T16:36:17.1040131Zat 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-05-20T16:36:17.1040575Zat 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-05-20T16:36:17.1040989Zat 
java.lang.reflect.Method.invoke(Method.java:498)
2020-05-20T16:36:17.1041515Zat 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2020-05-20T16:36:17.1041989Zat 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2020-05-20T16:36:17.1042468Zat 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2020-05-20T16:36:17.1042908Zat 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2020-05-20T16:36:17.1043497Zat 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
2020-05-20T16:36:17.1044096Zat 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
2020-05-20T16:36:17.1044833Zat 
java.util.concurrent.FutureTask.run(FutureTask.java:266)
2020-05-20T16:36:17.1045171Zat java.lang.Thread.run(Thread.java:748)
2020-05-20T16:36:17.1045358Z 
2020-05-20T16:36:17.6145018Z [INFO] 
2020-05-20T16:36:17.6145477Z [INFO] Results:
2020-05-20T16:36:17.6145654Z [INFO] 
2020-05-20T16:36:17.6145838Z [ERROR] Errors: 
2020-05-20T16:36:17.6146898Z [ERROR]   
FlinkKafkaProducerITCase.testResumeTransaction:108->assertRecord:201 » 
NoSuchElement
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17850) PostgresCatalogITCase . testGroupByInsert() fails on CI

2020-05-20 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112541#comment-17112541
 ] 

Robert Metzger commented on FLINK-17850:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1963&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8

> PostgresCatalogITCase . testGroupByInsert() fails on CI
> ---
>
> Key: FLINK-17850
> URL: https://issues.apache.org/jira/browse/FLINK-17850
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem
>Affects Versions: 1.11.0
>Reporter: Stephan Ewen
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.11.0, 1.12.0
>
>
> {{org.apache.flink.connector.jdbc.catalog.PostgresCatalogITCase . 
> testGroupByInsert}}
> Error:
> {code}
> 2020-05-20T16:36:33.9647037Z org.apache.flink.table.api.ValidationException: 
> 2020-05-20T16:36:33.9647354Z Field types of query result and registered 
> TableSink mypg.postgres.primitive_table2 do not match.
> 2020-05-20T16:36:33.9648233Z Query schema: [int: INT NOT NULL, EXPR$1: 
> VARBINARY(2147483647) NOT NULL, short: SMALLINT NOT NULL, EXPR$3: BIGINT, 
> EXPR$4: FLOAT, EXPR$5: DOUBLE, EXPR$6: DECIMAL(10, 5), EXPR$7: BOOLEAN, 
> EXPR$8: VARCHAR(2147483647), EXPR$9: CHAR(1) NOT NULL, EXPR$10: CHAR(1) NOT 
> NULL, EXPR$11: VARCHAR(20), EXPR$12: TIMESTAMP(5), EXPR$13: DATE, EXPR$14: 
> TIME(0), EXPR$15: DECIMAL(38, 18)]
> 2020-05-20T16:36:33.9650272Z Sink schema: [int: INT, bytea: 
> VARBINARY(2147483647), short: SMALLINT, long: BIGINT, real: FLOAT, 
> double_precision: DOUBLE, numeric: DECIMAL(10, 5), decimal: DECIMAL(10, 1), 
> boolean: BOOLEAN, text: VARCHAR(2147483647), char: CHAR(1), character: 
> CHAR(3), character_varying: VARCHAR(20), timestamp: TIMESTAMP(5), date: DATE, 
> time: TIME(0), default_numeric: DECIMAL(38, 18)]
> 2020-05-20T16:36:33.9651218Z  at 
> org.apache.flink.table.planner.sinks.TableSinkUtils$.validateSchemaAndApplyImplicitCast(TableSinkUtils.scala:100)
> 2020-05-20T16:36:33.9651689Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:218)
> 2020-05-20T16:36:33.9652136Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:193)
> 2020-05-20T16:36:33.9652936Z  at scala.Option.map(Option.scala:146)
> 2020-05-20T16:36:33.9653593Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:193)
> 2020-05-20T16:36:33.9653993Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:152)
> 2020-05-20T16:36:33.9654428Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:152)
> 2020-05-20T16:36:33.9654841Z  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> 2020-05-20T16:36:33.9655221Z  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> 2020-05-20T16:36:33.9655759Z  at 
> scala.collection.Iterator$class.foreach(Iterator.scala:891)
> 2020-05-20T16:36:33.9656072Z  at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> 2020-05-20T16:36:33.9656413Z  at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> 2020-05-20T16:36:33.9656890Z  at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> 2020-05-20T16:36:33.9657211Z  at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> 2020-05-20T16:36:33.9657525Z  at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104)
> 2020-05-20T16:36:33.9657878Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:152)
> 2020-05-20T16:36:33.9658350Z  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1217)
> 2020-05-20T16:36:33.9658784Z  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:663)
> 2020-05-20T16:36:33.9659391Z  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeOperation(TableEnvironmentImpl.java:750)
> 2020-05-20T16:36:33.9659856Z  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:653)
> 2020-05-20T16:36:33.9660507Z  at 
> org.apache.flink.table.planner.runtime.utils.TableEnvUtil$.execInsertSqlAndWaitResult(TableEnvUtil.scala:27)
> 2020-05-20T16:36:33.9661115Z  at 
> org.apache.flink.table.planner.runtime.utils.TableEnvUtil.execInsertSqlAndWaitResult(TableEnvUtil.scala)
> 2020-05-20T16:36:33.9661583Z  at 
> org.apache.flink.connector.jdbc.catalog.PostgresCatalogITCase.testGroupByInsert(PostgresCatalogITCase.java:88)
> {code}
> Full log: 
> https://dev.azure.com/sewen0794/1

[GitHub] [flink] fpompermaier commented on pull request #11906: [FLINK-17356][jdbc][postgres] Support PK and Unique constraints

2020-05-20 Thread GitBox


fpompermaier commented on pull request #11906:
URL: https://github.com/apache/flink/pull/11906#issuecomment-631659010


   @wuchong do you have an idea of why the testGroupByInsert is failing?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17444) StreamingFileSink Azure HadoopRecoverableWriter class missing.

2020-05-20 Thread Kostas Kloudas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-17444:
---
Affects Version/s: (was: 1.10.0)

> StreamingFileSink Azure HadoopRecoverableWriter class missing.
> --
>
> Key: FLINK-17444
> URL: https://issues.apache.org/jira/browse/FLINK-17444
> Project: Flink
>  Issue Type: New Feature
>  Components: FileSystems
>Reporter: Marie May
>Priority: Minor
>
> Hello, I was recently attempting to use the Streaming File Sink to store data 
> to Azure and got an exception saying that the 
> HadoopRecoverableWriter class is missing. When I searched whether anyone else 
> had the issue, I came across this post on [Stack 
> Overflow|https://stackoverflow.com/questions/61246683/flink-streamingfilesink-on-azure-blob-storage].
>  Seeing that no one responded, I asked about it on the mailing list and was told 
> to submit the issue here.
> Below is the exception message they posted, but the Stack Overflow post goes 
> into more detail about where they believe the issue comes from. 
> {code:java}
> java.lang.NoClassDefFoundError: 
> org/apache/flink/fs/azure/common/hadoop/HadoopRecoverableWriter   
>  
> at 
> org.apache.flink.fs.azure.common.hadoop.HadoopFileSystem.createRecoverableWriter(HadoopFileSystem.java:202)
>  
> at 
> org.apache.flink.core.fs.PluginFileSystemFactory$ClassLoaderFixingFileSystem.createRecoverableWriter(PluginFileSystemFactory.java:129)
>   
> at 
> org.apache.flink.core.fs.SafetyNetWrapperFileSystem.createRecoverableWriter(SafetyNetWrapperFileSystem.java:69)
>  
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.(Buckets.java:117)
>
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink$RowFormatBuilder.createBuckets(StreamingFileSink.java:288)
>
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.initializeState(StreamingFileSink.java:402)
>   
> at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
> 
> at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
>   
> at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
>
> at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:284)
> 
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.initializeStateAndOpen(StreamTask.java:1006)
> 
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:454)
>   
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:94)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:449)
>
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:461)
>  
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707) 
>  
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)   
>  
> at java.lang.Thread.run(Thread.java:748)  
> {code}
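
For context, a minimal sketch (bucket path made up) of the usage that triggers 
this: a row-format StreamingFileSink pointed at an Azure path needs a 
RecoverableWriter which, per the stack trace above, 
HadoopFileSystem.createRecoverableWriter cannot provide for the Azure 
filesystem:

{code:java}
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

public class AzureSinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromElements("a", "b", "c")
                // fails at runtime with the NoClassDefFoundError above
                .addSink(StreamingFileSink.forRowFormat(
                                new Path("wasbs://container@account.blob.core.windows.net/out"),
                                new SimpleStringEncoder<String>("UTF-8"))
                        .build());
        env.execute("azure-sink-sketch");
    }
}
{code}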



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17444) StreamingFileSink Azure HadoopRecoverableWriter class missing.

2020-05-20 Thread Kostas Kloudas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-17444:
---
Fix Version/s: (was: 1.11.0)

> StreamingFileSink Azure HadoopRecoverableWriter class missing.
> --
>
> Key: FLINK-17444
> URL: https://issues.apache.org/jira/browse/FLINK-17444
> Project: Flink
>  Issue Type: New Feature
>  Components: FileSystems
>Affects Versions: 1.10.0
>Reporter: Marie May
>Priority: Minor
>
> Hello, I was recently attempting to use the Streaming File Sink to store data 
> to Azure and got an exception saying that the 
> HadoopRecoverableWriter class is missing. When I searched whether anyone else 
> had the issue, I came across this post on [Stack 
> Overflow|https://stackoverflow.com/questions/61246683/flink-streamingfilesink-on-azure-blob-storage].
>  Seeing that no one responded, I asked about it on the mailing list and was told 
> to submit the issue here.
> Below is the exception message they posted, but the Stack Overflow post goes 
> into more detail about where they believe the issue comes from. 
> {code:java}
> java.lang.NoClassDefFoundError: 
> org/apache/flink/fs/azure/common/hadoop/HadoopRecoverableWriter   
>  
> at 
> org.apache.flink.fs.azure.common.hadoop.HadoopFileSystem.createRecoverableWriter(HadoopFileSystem.java:202)
>  
> at 
> org.apache.flink.core.fs.PluginFileSystemFactory$ClassLoaderFixingFileSystem.createRecoverableWriter(PluginFileSystemFactory.java:129)
>   
> at 
> org.apache.flink.core.fs.SafetyNetWrapperFileSystem.createRecoverableWriter(SafetyNetWrapperFileSystem.java:69)
>  
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.(Buckets.java:117)
>
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink$RowFormatBuilder.createBuckets(StreamingFileSink.java:288)
>
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.initializeState(StreamingFileSink.java:402)
>   
> at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
> 
> at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
>   
> at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
>
> at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:284)
> 
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.initializeStateAndOpen(StreamTask.java:1006)
> 
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:454)
>   
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:94)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:449)
>
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:461)
>  
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707) 
>  
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)   
>  
> at java.lang.Thread.run(Thread.java:748)  
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-17444) StreamingFileSink Azure HadoopRecoverableWriter class missing.

2020-05-20 Thread Kostas Kloudas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112523#comment-17112523
 ] 

Kostas Kloudas edited comment on FLINK-17444 at 5/20/20, 6:39 PM:
--

So from the discussion here, I think that this is not a Bug, but rather a 
feature to be added in future releases. I will change the type to  "New 
Feature" then, so that we do not block the 1.11 release on this.


was (Author: kkl0u):
So from the discussion here, I think that this is not a Bug, but rather a 
feature to be added in future releases. I will change the type to  "New 
Feature" then. 

> StreamingFileSink Azure HadoopRecoverableWriter class missing.
> --
>
> Key: FLINK-17444
> URL: https://issues.apache.org/jira/browse/FLINK-17444
> Project: Flink
>  Issue Type: New Feature
>  Components: FileSystems
>Reporter: Marie May
>Priority: Minor
>
> Hello, I was recently attempting to use the Streaming File Sink to store data 
> to Azure and got an exception saying that the 
> HadoopRecoverableWriter class is missing. When I searched whether anyone else 
> had the issue, I came across this post on [Stack 
> Overflow|https://stackoverflow.com/questions/61246683/flink-streamingfilesink-on-azure-blob-storage].
>  Seeing that no one responded, I asked about it on the mailing list and was told 
> to submit the issue here.
> Below is the exception message they posted, but the Stack Overflow post goes 
> into more detail about where they believe the issue comes from. 
> {code:java}
> java.lang.NoClassDefFoundError: 
> org/apache/flink/fs/azure/common/hadoop/HadoopRecoverableWriter   
>  
> at 
> org.apache.flink.fs.azure.common.hadoop.HadoopFileSystem.createRecoverableWriter(HadoopFileSystem.java:202)
>  
> at 
> org.apache.flink.core.fs.PluginFileSystemFactory$ClassLoaderFixingFileSystem.createRecoverableWriter(PluginFileSystemFactory.java:129)
>   
> at 
> org.apache.flink.core.fs.SafetyNetWrapperFileSystem.createRecoverableWriter(SafetyNetWrapperFileSystem.java:69)
>  
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.(Buckets.java:117)
>
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink$RowFormatBuilder.createBuckets(StreamingFileSink.java:288)
>
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.initializeState(StreamingFileSink.java:402)
>   
> at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
> 
> at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
>   
> at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
>
> at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:284)
> 
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.initializeStateAndOpen(StreamTask.java:1006)
> 
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:454)
>   
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:94)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:449)
>
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:461)
>  
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707) 
>  
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)   
>  
> at java.lang.Thread.run(Thread.java:748)  
> {code}
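
A short, hypothetical sketch of the kind of job that hits this code path
(account and container names are placeholders): building a row-format
StreamingFileSink against a wasbs:// path makes Buckets call
createRecoverableWriter() on the target file system during state
initialization, which is where the NoClassDefFoundError above is thrown.

{code:java}
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

public class AzureStreamingFileSinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder Azure Blob Storage path; any path served by the
        // flink-fs-azure plugin takes the HadoopFileSystem route shown above.
        StreamingFileSink<String> sink = StreamingFileSink
                .forRowFormat(
                        new Path("wasbs://container@account.blob.core.windows.net/out"),
                        new SimpleStringEncoder<String>("UTF-8"))
                .build();

        env.fromElements("a", "b", "c").addSink(sink);
        env.execute("streaming-file-sink-to-azure");
    }
}
{code}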





[jira] [Updated] (FLINK-17444) StreamingFileSink Azure HadoopRecoverableWriter class missing.

2020-05-20 Thread Kostas Kloudas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-17444:
---
Priority: Major  (was: Critical)

> StreamingFileSink Azure HadoopRecoverableWriter class missing.
> --
>
> Key: FLINK-17444
> URL: https://issues.apache.org/jira/browse/FLINK-17444
> Project: Flink
>  Issue Type: New Feature
>  Components: FileSystems
>Affects Versions: 1.10.0
>Reporter: Marie May
>Priority: Major
> Fix For: 1.11.0
>
>





[jira] [Updated] (FLINK-17444) StreamingFileSink Azure HadoopRecoverableWriter class missing.

2020-05-20 Thread Kostas Kloudas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-17444:
---
Priority: Minor  (was: Major)

> StreamingFileSink Azure HadoopRecoverableWriter class missing.
> --
>
> Key: FLINK-17444
> URL: https://issues.apache.org/jira/browse/FLINK-17444
> Project: Flink
>  Issue Type: New Feature
>  Components: FileSystems
>Affects Versions: 1.10.0
>Reporter: Marie May
>Priority: Minor
> Fix For: 1.11.0
>
>





[jira] [Commented] (FLINK-17444) StreamingFileSink Azure HadoopRecoverableWriter class missing.

2020-05-20 Thread Kostas Kloudas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112523#comment-17112523
 ] 

Kostas Kloudas commented on FLINK-17444:


So from the discussion here, I think that this is not a Bug, but rather a
feature to be added in future releases. I will change the type to "New
Feature" then.

> StreamingFileSink Azure HadoopRecoverableWriter class missing.
> --
>
> Key: FLINK-17444
> URL: https://issues.apache.org/jira/browse/FLINK-17444
> Project: Flink
>  Issue Type: New Feature
>  Components: FileSystems
>Affects Versions: 1.10.0
>Reporter: Marie May
>Priority: Critical
> Fix For: 1.11.0
>
>





[jira] [Updated] (FLINK-17444) StreamingFileSink Azure HadoopRecoverableWriter class missing.

2020-05-20 Thread Kostas Kloudas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-17444:
---
Issue Type: New Feature  (was: Bug)

> StreamingFileSink Azure HadoopRecoverableWriter class missing.
> --
>
> Key: FLINK-17444
> URL: https://issues.apache.org/jira/browse/FLINK-17444
> Project: Flink
>  Issue Type: New Feature
>  Components: FileSystems
>Affects Versions: 1.10.0
>Reporter: Marie May
>Priority: Critical
> Fix For: 1.11.0
>
>





[jira] [Updated] (FLINK-17824) "Resuming Savepoint" e2e stalls indefinitely

2020-05-20 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-17824:
---
Affects Version/s: 1.12.0

> "Resuming Savepoint" e2e stalls indefinitely 
> -
>
> Key: FLINK-17824
> URL: https://issues.apache.org/jira/browse/FLINK-17824
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Tests
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> CI: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1887&view=logs&j=91bf6583-3fb2-592f-e4d4-d79d79c3230a&t=94459a52-42b6-5bfc-5d74-690b5d3c6de8
> {code}
> 2020-05-19T21:05:52.9696236Z 
> ==
> 2020-05-19T21:05:52.9696860Z Running 'Resuming Savepoint (file, async, scale 
> down) end-to-end test'
> 2020-05-19T21:05:52.9697243Z 
> ==
> 2020-05-19T21:05:52.9713094Z TEST_DATA_DIR: 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-52970362751
> 2020-05-19T21:05:53.1194478Z Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT
> 2020-05-19T21:05:53.2180375Z Starting cluster.
> 2020-05-19T21:05:53.9986167Z Starting standalonesession daemon on host 
> fv-az558.
> 2020-05-19T21:05:55.5997224Z Starting taskexecutor daemon on host fv-az558.
> 2020-05-19T21:05:55.6223837Z Waiting for Dispatcher REST endpoint to come 
> up...
> 2020-05-19T21:05:57.0552482Z Waiting for Dispatcher REST endpoint to come 
> up...
> 2020-05-19T21:05:57.9446865Z Waiting for Dispatcher REST endpoint to come 
> up...
> 2020-05-19T21:05:59.0098434Z Waiting for Dispatcher REST endpoint to come 
> up...
> 2020-05-19T21:06:00.0569710Z Dispatcher REST endpoint is up.
> 2020-05-19T21:06:07.7099937Z Job (a92a74de8446a80403798bb4806b73f3) is 
> running.
> 2020-05-19T21:06:07.7855906Z Waiting for job to process up to 200 records, 
> current progress: 114 records ...
> 2020-05-19T21:06:55.5755111Z 
> 2020-05-19T21:06:55.5756550Z 
> 
> 2020-05-19T21:06:55.5757225Z  The program finished with the following 
> exception:
> 2020-05-19T21:06:55.5757566Z 
> 2020-05-19T21:06:55.5765453Z org.apache.flink.util.FlinkException: Could not 
> stop with a savepoint job "a92a74de8446a80403798bb4806b73f3".
> 2020-05-19T21:06:55.5766873Z  at 
> org.apache.flink.client.cli.CliFrontend.lambda$stop$5(CliFrontend.java:485)
> 2020-05-19T21:06:55.5767980Z  at 
> org.apache.flink.client.cli.CliFrontend.runClusterAction(CliFrontend.java:854)
> 2020-05-19T21:06:55.5769014Z  at 
> org.apache.flink.client.cli.CliFrontend.stop(CliFrontend.java:477)
> 2020-05-19T21:06:55.5770052Z  at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:921)
> 2020-05-19T21:06:55.5771107Z  at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:982)
> 2020-05-19T21:06:55.5772223Z  at 
> org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
> 2020-05-19T21:06:55.5773325Z  at 
> org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:982)
> 2020-05-19T21:06:55.5774871Z Caused by: 
> java.util.concurrent.ExecutionException: 
> java.util.concurrent.CompletionException: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint 
> Coordinator is suspending.
> 2020-05-19T21:06:55.5777183Z  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 2020-05-19T21:06:55.5778884Z  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
> 2020-05-19T21:06:55.5779920Z  at 
> org.apache.flink.client.cli.CliFrontend.lambda$stop$5(CliFrontend.java:483)
> 2020-05-19T21:06:55.5781175Z  ... 6 more
> 2020-05-19T21:06:55.5782391Z Caused by: 
> java.util.concurrent.CompletionException: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint 
> Coordinator is suspending.
> 2020-05-19T21:06:55.5783885Z  at 
> org.apache.flink.runtime.scheduler.SchedulerBase.lambda$stopWithSavepoint$9(SchedulerBase.java:890)
> 2020-05-19T21:06:55.5784992Z  at 
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
> 2020-05-19T21:06:55.5786492Z  at 
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
> 2020-05-19T21:06:55.5787601Z  at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
> 2020-05-19T21:06:55.5788682Z  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcAct
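
The client-side failure above comes from the stop-with-savepoint future
completing exceptionally. As a rough sketch, under assumptions (a placeholder
REST endpoint, the job id copied from the log, and an illustrative savepoint
directory), the same operation issued programmatically looks like this; the
e2e script drives it through the `flink stop` CLI (CliFrontend) instead:

{code:java}
import org.apache.flink.api.common.JobID;
import org.apache.flink.client.program.rest.RestClusterClient;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.RestOptions;

public class StopWithSavepointSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setString(RestOptions.ADDRESS, "localhost"); // placeholder endpoint
        conf.setInteger(RestOptions.PORT, 8081);

        RestClusterClient<String> client = new RestClusterClient<>(conf, "standalone");
        try {
            // This is the future that CliFrontend#stop waits on; in the log it
            // completes exceptionally with "Checkpoint Coordinator is suspending."
            String savepointPath = client
                    .stopWithSavepoint(
                            JobID.fromHexString("a92a74de8446a80403798bb4806b73f3"),
                            false, // advanceToEndOfEventTime
                            "file:///tmp/savepoints") // illustrative target directory
                    .get();
            System.out.println("Savepoint written to " + savepointPath);
        } finally {
            client.close();
        }
    }
}
{code}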

[GitHub] [flink] rmetzger commented on pull request #11906: [FLINK-17356][jdbc][postgres] Support PK and Unique constraints

2020-05-20 Thread GitBox


rmetzger commented on pull request #11906:
URL: https://github.com/apache/flink/pull/11906#issuecomment-631645770


   https://issues.apache.org/jira/browse/FLINK-17850







[GitHub] [flink] rmetzger commented on pull request #11906: [FLINK-17356][jdbc][postgres] Support PK and Unique constraints

2020-05-20 Thread GitBox


rmetzger commented on pull request #11906:
URL: https://github.com/apache/flink/pull/11906#issuecomment-631643954


   I believe this test is permanently failing on master: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1954&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8







[jira] [Closed] (FLINK-16795) End to end tests timeout on Azure

2020-05-20 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-16795.
--
Resolution: Fixed

> End to end tests timeout on Azure
> -
>
> Key: FLINK-16795
> URL: https://issues.apache.org/jira/browse/FLINK-16795
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Example: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6650&view=logs&j=08866332-78f7-59e4-4f7e-49a56faa3179
>  or 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6637&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> {code}##[error]The job running on agent Azure Pipelines 6 ran longer than the 
> maximum time of 200 minutes. For more information, see 
> https://go.microsoft.com/fwlink/?linkid=2077134
> {code}
> and {code}##[error]The operation was canceled.{code}





[jira] [Commented] (FLINK-16795) End to end tests timeout on Azure

2020-05-20 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112505#comment-17112505
 ] 

Robert Metzger commented on FLINK-16795:


The reason the Kerberized YARN test was slow seems to be that downloading
12.5 MB of Ubuntu packages took 10 minutes.

I will close this issue again.

> End to end tests timeout on Azure
> -
>
> Key: FLINK-16795
> URL: https://issues.apache.org/jira/browse/FLINK-16795
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>





[jira] [Commented] (FLINK-16636) TableEnvironmentITCase is crashing on Travis

2020-05-20 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112499#comment-17112499
 ] 

Robert Metzger commented on FLINK-16636:


Thanks a lot for looking into this in such detail. How about closing this
ticket, as we are not using Travis for testing anymore?

If similar issues surface on Azure as well, we can reopen it and address the
problem by reducing the concurrency of the tests to 1, ok?

> TableEnvironmentITCase is crashing on Travis
> 
>
> Key: FLINK-16636
> URL: https://issues.apache.org/jira/browse/FLINK-16636
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Assignee: Caizhi Weng
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Here is the failing instance and the exception stack:
> https://api.travis-ci.org/v3/job/663408376/log.txt
> But there is not much helpful information there; maybe it was an accidental
> Maven problem.
> {code}
> 09:55:07.703 [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test 
> (integration-tests) on project flink-table-planner-blink_2.11: There are test 
> failures.
> 09:55:07.703 [ERROR] 
> 09:55:07.703 [ERROR] Please refer to 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target/surefire-reports
>  for the individual test results.
> 09:55:07.703 [ERROR] Please refer to dump files (if any exist) [date].dump, 
> [date]-jvmRun[N].dump and [date].dumpstream.
> 09:55:07.703 [ERROR] ExecutionException The forked VM terminated without 
> properly saying goodbye. VM crash or System.exit called?
> 09:55:07.703 [ERROR] Command was /bin/sh -c cd 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target 
> && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=1 -XX:+UseG1GC -jar 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target/surefire/surefirebooter714252487017838305.jar
>  
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target/surefire
>  2020-03-17T09-34-41_826-jvmRun1 surefire4625103637332937565tmp 
> surefire_43192129054983363633tmp
> 09:55:07.703 [ERROR] Error occurred in starting fork, check output in log
> 09:55:07.703 [ERROR] Process Exit Code: 137
> 09:55:07.703 [ERROR] Crashed tests:
> 09:55:07.703 [ERROR] org.apache.flink.table.api.TableEnvironmentITCase
> 09:55:07.703 [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: 
> ExecutionException The forked VM terminated without properly saying goodbye. 
> VM crash or System.exit called?
> 09:55:07.703 [ERROR] Command was /bin/sh -c cd 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target 
> && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=1 -XX:+UseG1GC -jar 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target/surefire/surefirebooter714252487017838305.jar
>  
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target/surefire
>  2020-03-17T09-34-41_826-jvmRun1 surefire4625103637332937565tmp 
> surefire_43192129054983363633tmp
> 09:55:07.703 [ERROR] Error occurred in starting fork, check output in log
> 09:55:07.703 [ERROR] Process Exit Code: 137
> 09:55:07.703 [ERROR] Crashed tests:
> 09:55:07.703 [ERROR] org.apache.flink.table.api.TableEnvironmentITCase
> 09:55:07.703 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:510)
> 09:55:07.704 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkOnceMultiple(ForkStarter.java:382)
> 09:55:07.704 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:297)
> 09:55:07.704 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:246)
> 09:55:07.704 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183)
> 09:55:07.704 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011)
> 09:55:07.704 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:857)
> 09:55:07.704 [ERROR] at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
> 09:55:07.704 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> 09:55:07.704 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> 09:55:07.704 [ERROR] at 
> org.apache.maven.lifecyc

[jira] [Commented] (FLINK-17850) PostgresCatalogITCase . testGroupByInsert() fails on CI

2020-05-20 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112495#comment-17112495
 ] 

Robert Metzger commented on FLINK-17850:


also on master: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1954&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8

> PostgresCatalogITCase . testGroupByInsert() fails on CI
> ---
>
> Key: FLINK-17850
> URL: https://issues.apache.org/jira/browse/FLINK-17850
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem
>Affects Versions: 1.11.0
>Reporter: Stephan Ewen
>Priority: Blocker
> Fix For: 1.11.0
>
>
> {{org.apache.flink.connector.jdbc.catalog.PostgresCatalogITCase . 
> testGroupByInsert}}
> Error:
> {code}
> 2020-05-20T16:36:33.9647037Z org.apache.flink.table.api.ValidationException: 
> 2020-05-20T16:36:33.9647354Z Field types of query result and registered 
> TableSink mypg.postgres.primitive_table2 do not match.
> 2020-05-20T16:36:33.9648233Z Query schema: [int: INT NOT NULL, EXPR$1: 
> VARBINARY(2147483647) NOT NULL, short: SMALLINT NOT NULL, EXPR$3: BIGINT, 
> EXPR$4: FLOAT, EXPR$5: DOUBLE, EXPR$6: DECIMAL(10, 5), EXPR$7: BOOLEAN, 
> EXPR$8: VARCHAR(2147483647), EXPR$9: CHAR(1) NOT NULL, EXPR$10: CHAR(1) NOT 
> NULL, EXPR$11: VARCHAR(20), EXPR$12: TIMESTAMP(5), EXPR$13: DATE, EXPR$14: 
> TIME(0), EXPR$15: DECIMAL(38, 18)]
> 2020-05-20T16:36:33.9650272Z Sink schema: [int: INT, bytea: 
> VARBINARY(2147483647), short: SMALLINT, long: BIGINT, real: FLOAT, 
> double_precision: DOUBLE, numeric: DECIMAL(10, 5), decimal: DECIMAL(10, 1), 
> boolean: BOOLEAN, text: VARCHAR(2147483647), char: CHAR(1), character: 
> CHAR(3), character_varying: VARCHAR(20), timestamp: TIMESTAMP(5), date: DATE, 
> time: TIME(0), default_numeric: DECIMAL(38, 18)]
> 2020-05-20T16:36:33.9651218Z  at 
> org.apache.flink.table.planner.sinks.TableSinkUtils$.validateSchemaAndApplyImplicitCast(TableSinkUtils.scala:100)
> 2020-05-20T16:36:33.9651689Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:218)
> 2020-05-20T16:36:33.9652136Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:193)
> 2020-05-20T16:36:33.9652936Z  at scala.Option.map(Option.scala:146)
> 2020-05-20T16:36:33.9653593Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:193)
> 2020-05-20T16:36:33.9653993Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:152)
> 2020-05-20T16:36:33.9654428Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:152)
> 2020-05-20T16:36:33.9654841Z  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> 2020-05-20T16:36:33.9655221Z  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> 2020-05-20T16:36:33.9655759Z  at 
> scala.collection.Iterator$class.foreach(Iterator.scala:891)
> 2020-05-20T16:36:33.9656072Z  at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> 2020-05-20T16:36:33.9656413Z  at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> 2020-05-20T16:36:33.9656890Z  at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> 2020-05-20T16:36:33.9657211Z  at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> 2020-05-20T16:36:33.9657525Z  at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104)
> 2020-05-20T16:36:33.9657878Z  at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:152)
> 2020-05-20T16:36:33.9658350Z  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1217)
> 2020-05-20T16:36:33.9658784Z  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:663)
> 2020-05-20T16:36:33.9659391Z  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeOperation(TableEnvironmentImpl.java:750)
> 2020-05-20T16:36:33.9659856Z  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:653)
> 2020-05-20T16:36:33.9660507Z  at 
> org.apache.flink.table.planner.runtime.utils.TableEnvUtil$.execInsertSqlAndWaitResult(TableEnvUtil.scala:27)
> 2020-05-20T16:36:33.9661115Z  at 
> org.apache.flink.table.planner.runtime.utils.TableEnvUtil.execInsertSqlAndWaitResult(TableEnvUtil.scala)
> 2020-05-20T16:36:33.9661583Z  at 
> org.apache.flink.connector.jdbc.catalog.PostgresCatalogITCase.testGroupByInsert(PostgresCatalogITCase.java:88)
> {code}
> Full log: 
> https://dev.azure.com/sewen0794/19b23adf-d190-4fb4-ae6e-2e92b08923a3/_apis/build/builds/25/logs/93
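
The validation error above is about the query schema not lining up with the
registered sink schema (column count, types, nullability, and the
auto-generated EXPR$n names). A hypothetical sketch, not the actual test
fixture, of how such a mismatch arises and how explicit casts and aliases make
validation pass; the table names, columns, and connectors below are
placeholders:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class SchemaAlignmentSketch {
    public static void main(String[] args) {
        // Blink planner, streaming mode.
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());

        tEnv.executeSql(
                "CREATE TABLE source_t (a INT, b DOUBLE) WITH ('connector' = 'datagen')");
        tEnv.executeSql(
                "CREATE TABLE sink_t (a INT, b DECIMAL(10, 1)) WITH ('connector' = 'blackhole')");

        // A variant like this fails validation in the same way as the log above:
        // MAX(b) is DOUBLE (named EXPR$1), while the sink declares DECIMAL(10, 1).
        //   INSERT INTO sink_t SELECT a, MAX(b) FROM source_t GROUP BY a

        // Casting the aggregate (and aliasing away the EXPR$n name) makes the
        // query schema match the sink schema:
        tEnv.executeSql(
                "INSERT INTO sink_t "
                        + "SELECT a, CAST(MAX(b) AS DECIMAL(10, 1)) AS b "
                        + "FROM source_t GROUP BY a");
    }
}
{code}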

[jira] [Updated] (FLINK-17850) PostgresCatalogITCase . testGroupByInsert() fails on CI

2020-05-20 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-17850:
---
Labels: test-stability  (was: )

> PostgresCatalogITCase . testGroupByInsert() fails on CI
> ---
>
> Key: FLINK-17850
> URL: https://issues.apache.org/jira/browse/FLINK-17850
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem
>Affects Versions: 1.11.0
>Reporter: Stephan Ewen
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.11.0, 1.12.0
>
>





[jira] [Updated] (FLINK-17850) PostgresCatalogITCase . testGroupByInsert() fails on CI

2020-05-20 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-17850:
---
Fix Version/s: 1.12.0

> PostgresCatalogITCase . testGroupByInsert() fails on CI
> ---
>
> Key: FLINK-17850
> URL: https://issues.apache.org/jira/browse/FLINK-17850
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem
>Affects Versions: 1.11.0
>Reporter: Stephan Ewen
>Priority: Blocker
> Fix For: 1.11.0, 1.12.0
>
>





[jira] [Commented] (FLINK-17721) AbstractHadoopFileSystemITTest .cleanupDirectoryWithRetry fails with AssertionError

2020-05-20 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112493#comment-17112493
 ] 

Robert Metzger commented on FLINK-17721:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1956&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8

> AbstractHadoopFileSystemITTest .cleanupDirectoryWithRetry fails with 
> AssertionError 
> 
>
> Key: FLINK-17721
> URL: https://issues.apache.org/jira/browse/FLINK-17721
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems, Tests
>Reporter: Robert Metzger
>Assignee: Xintong Song
>Priority: Critical
> Fix For: 1.11.0
>
>
> CI: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1343&view=logs&j=961f8f81-6b52-53df-09f6-7291a2e4af6a&t=2f99feaa-7a9b-5916-4c1c-5e61f395079e
> {code}
> [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 34.079 s <<< FAILURE! - in 
> org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase
> [ERROR] org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase  Time elapsed: 
> 21.334 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.flink.runtime.fs.hdfs.AbstractHadoopFileSystemITTest.cleanupDirectoryWithRetry(AbstractHadoopFileSystemITTest.java:162)
>   at 
> org.apache.flink.runtime.fs.hdfs.AbstractHadoopFileSystemITTest.teardown(AbstractHadoopFileSystemITTest.java:149)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}
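
The assertion that fails here is the post-cleanup existence check. Below is a
hedged sketch of what such a retrying cleanup looks like (names and timings
are illustrative; the real logic lives in
AbstractHadoopFileSystemITTest#cleanupDirectoryWithRetry). Object stores are
eventually consistent, so a delete may not become visible immediately, and the
final check can still see the directory once the retry budget runs out.

{code:java}
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;
import org.junit.Assert;

public final class CleanupRetrySketch {

    static void cleanupDirectoryWithRetry(FileSystem fs, Path directory, long timeoutMillis)
            throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (fs.exists(directory) && System.currentTimeMillis() < deadline) {
            fs.delete(directory, true); // recursive delete, retried until invisible
            Thread.sleep(50L);
        }
        // This is the kind of check that throws the AssertionError in the log:
        Assert.assertFalse(fs.exists(directory));
    }
}
{code}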





[jira] [Closed] (FLINK-17730) HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart times out

2020-05-20 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-17730.
--
Resolution: Fixed

> HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart
>  times out
> 
>
> Key: FLINK-17730
> URL: https://issues.apache.org/jira/browse/FLINK-17730
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, FileSystems, Tests
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0, 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1374&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8
> After 5 minutes 
> {code}
> 2020-05-15T06:56:38.1688341Z "main" #1 prio=5 os_prio=0 
> tid=0x7fa10800b800 nid=0x1161 runnable [0x7fa110959000]
> 2020-05-15T06:56:38.1688709Zjava.lang.Thread.State: RUNNABLE
> 2020-05-15T06:56:38.1689028Z  at 
> java.net.SocketInputStream.socketRead0(Native Method)
> 2020-05-15T06:56:38.1689496Z  at 
> java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
> 2020-05-15T06:56:38.1689921Z  at 
> java.net.SocketInputStream.read(SocketInputStream.java:171)
> 2020-05-15T06:56:38.1690316Z  at 
> java.net.SocketInputStream.read(SocketInputStream.java:141)
> 2020-05-15T06:56:38.1690723Z  at 
> sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
> 2020-05-15T06:56:38.1691196Z  at 
> sun.security.ssl.InputRecord.readV3Record(InputRecord.java:593)
> 2020-05-15T06:56:38.1691608Z  at 
> sun.security.ssl.InputRecord.read(InputRecord.java:532)
> 2020-05-15T06:56:38.1692023Z  at 
> sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
> 2020-05-15T06:56:38.1692558Z  - locked <0xb94644f8> (a 
> java.lang.Object)
> 2020-05-15T06:56:38.1692946Z  at 
> sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:933)
> 2020-05-15T06:56:38.1693371Z  at 
> sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
> 2020-05-15T06:56:38.1694151Z  - locked <0xb9464d20> (a 
> sun.security.ssl.AppInputStream)
> 2020-05-15T06:56:38.1694908Z  at 
> org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
> 2020-05-15T06:56:38.1695475Z  at 
> org.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198)
> 2020-05-15T06:56:38.1696007Z  at 
> org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176)
> 2020-05-15T06:56:38.1696509Z  at 
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
> 2020-05-15T06:56:38.1696993Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1697466Z  at 
> com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-05-15T06:56:38.1698069Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1698567Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1699041Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1699624Z  at 
> com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-05-15T06:56:38.1700090Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1700584Z  at 
> com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
> 2020-05-15T06:56:38.1701282Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1701800Z  at 
> com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
> 2020-05-15T06:56:38.1702328Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1702804Z  at 
> org.apache.hadoop.fs.s3a.S3AInputStream.lambda$read$3(S3AInputStream.java:445)
> 2020-05-15T06:56:38.1703270Z  at 
> org.apache.hadoop.fs.s3a.S3AInputStream$$Lambda$42/1204178174.execute(Unknown 
> Source)
> 2020-05-15T06:56:38.1703677Z  at 
> org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
> 2020-05-15T06:56:38.1704090Z  at 
> org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260)
> 2020-05-15T06:56:38.1704607Z  at 
> org.apache.hadoop.fs.s3a.Invoker$$Lambda$23/1991724700.execute(Unknown Source)
> 2020-05-15T06:56:38.1705115Z  at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:317)
> 2020-05-15T06:56:38.1705551Z  at 
> org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:256)
> 2020-05
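
The stuck "main" thread above is blocked in a socket read during an S3
multi-part operation. For orientation, here is a hedged sketch of what
"multiple persists" exercises in the RecoverableWriter API; the bucket, path,
and payload sizes are placeholders, and the real test is
HadoopS3RecoverableWriterITCase:

{code:java}
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;
import org.apache.flink.core.fs.RecoverableFsDataOutputStream;
import org.apache.flink.core.fs.RecoverableWriter;

public class RecoverableWriterSketch {
    public static void main(String[] args) throws Exception {
        Path target = new Path("s3://my-bucket/test/out"); // placeholder bucket
        FileSystem fs = target.getFileSystem();

        RecoverableWriter writer = fs.createRecoverableWriter();
        RecoverableFsDataOutputStream out = writer.open(target);

        // Each persist() flushes the data written so far (uploading multi-part
        // chunks for S3) and returns a handle that a restart can resume from;
        // the hang in the thread dump happens inside such an S3 round trip.
        out.write(new byte[10 * 1024 * 1024]);
        RecoverableWriter.ResumeRecoverable resumable = out.persist();
        out.write(new byte[10 * 1024 * 1024]);
        resumable = out.persist();

        // After a simulated failure, recovery resumes from the last persist
        // and the file is finalized through the commit protocol.
        RecoverableFsDataOutputStream resumed = writer.recover(resumable);
        resumed.closeForCommit().commit();
    }
}
{code}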

[jira] [Commented] (FLINK-17730) HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart times out

2020-05-20 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112491#comment-17112491
 ] 

Robert Metzger commented on FLINK-17730:


Merged to "release-1.11" in 
https://github.com/apache/flink/commit/e321e483fa0b3154e5e8417809bd669482783a13

> HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart
>  times out
> 
>
> Key: FLINK-17730
> URL: https://issues.apache.org/jira/browse/FLINK-17730
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, FileSystems, Tests
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0, 1.12.0
>
>

[jira] [Updated] (FLINK-17730) HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart times out

2020-05-20 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-17730:
---
Fix Version/s: 1.11.0

> HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart
>  times out
> 
>
> Key: FLINK-17730
> URL: https://issues.apache.org/jira/browse/FLINK-17730
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, FileSystems, Tests
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0, 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1374&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8
> After 5 minutes 
> {code}
> 2020-05-15T06:56:38.1688341Z "main" #1 prio=5 os_prio=0 
> tid=0x7fa10800b800 nid=0x1161 runnable [0x7fa110959000]
> 2020-05-15T06:56:38.1688709Zjava.lang.Thread.State: RUNNABLE
> 2020-05-15T06:56:38.1689028Z  at 
> java.net.SocketInputStream.socketRead0(Native Method)
> 2020-05-15T06:56:38.1689496Z  at 
> java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
> 2020-05-15T06:56:38.1689921Z  at 
> java.net.SocketInputStream.read(SocketInputStream.java:171)
> 2020-05-15T06:56:38.1690316Z  at 
> java.net.SocketInputStream.read(SocketInputStream.java:141)
> 2020-05-15T06:56:38.1690723Z  at 
> sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
> 2020-05-15T06:56:38.1691196Z  at 
> sun.security.ssl.InputRecord.readV3Record(InputRecord.java:593)
> 2020-05-15T06:56:38.1691608Z  at 
> sun.security.ssl.InputRecord.read(InputRecord.java:532)
> 2020-05-15T06:56:38.1692023Z  at 
> sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
> 2020-05-15T06:56:38.1692558Z  - locked <0xb94644f8> (a 
> java.lang.Object)
> 2020-05-15T06:56:38.1692946Z  at 
> sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:933)
> 2020-05-15T06:56:38.1693371Z  at 
> sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
> 2020-05-15T06:56:38.1694151Z  - locked <0xb9464d20> (a 
> sun.security.ssl.AppInputStream)
> 2020-05-15T06:56:38.1694908Z  at 
> org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
> 2020-05-15T06:56:38.1695475Z  at 
> org.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198)
> 2020-05-15T06:56:38.1696007Z  at 
> org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176)
> 2020-05-15T06:56:38.1696509Z  at 
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
> 2020-05-15T06:56:38.1696993Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1697466Z  at 
> com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-05-15T06:56:38.1698069Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1698567Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1699041Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1699624Z  at 
> com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-05-15T06:56:38.1700090Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1700584Z  at 
> com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
> 2020-05-15T06:56:38.1701282Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1701800Z  at 
> com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
> 2020-05-15T06:56:38.1702328Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1702804Z  at 
> org.apache.hadoop.fs.s3a.S3AInputStream.lambda$read$3(S3AInputStream.java:445)
> 2020-05-15T06:56:38.1703270Z  at 
> org.apache.hadoop.fs.s3a.S3AInputStream$$Lambda$42/1204178174.execute(Unknown 
> Source)
> 2020-05-15T06:56:38.1703677Z  at 
> org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
> 2020-05-15T06:56:38.1704090Z  at 
> org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260)
> 2020-05-15T06:56:38.1704607Z  at 
> org.apache.hadoop.fs.s3a.Invoker$$Lambda$23/1991724700.execute(Unknown Source)
> 2020-05-15T06:56:38.1705115Z  at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:317)
> 2020-05-15T06:56:38.1705551Z  at 
> org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:256)
> 2

[GitHub] [flink] flinkbot edited a comment on pull request #12270: [FLINK-17822][MemMan] Use private Reference.tryHandlePending|waitForReferenceProcessing to trigger GC cleaner

2020-05-20 Thread GitBox


flinkbot edited a comment on pull request #12270:
URL: https://github.com/apache/flink/pull/12270#issuecomment-631585690


   
   ## CI report:
   
   * 5472f93e8f79940941b5ba6637be6810d6b8b46a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1965)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12244: [FLINK-17258][network] Fix couple of ITCases that were failing with enabled unaligned checkpoints

2020-05-20 Thread GitBox


flinkbot edited a comment on pull request #12244:
URL: https://github.com/apache/flink/pull/12244#issuecomment-630723509


   
   ## CI report:
   
   * d0d2042ea348654430863ccb51084c30714d8a47 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1959)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-17844) Activate japicmp-maven-plugin checks for @PublicEvolving between bug fix releases (x.y.u -> x.y.v)

2020-05-20 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-17844:


Assignee: Chesnay Schepler

> Activate japicmp-maven-plugin checks for @PublicEvolving between bug fix 
> releases (x.y.u -> x.y.v)
> --
>
> Key: FLINK-17844
> URL: https://issues.apache.org/jira/browse/FLINK-17844
> Project: Flink
>  Issue Type: New Feature
>  Components: Build System
>Reporter: Till Rohrmann
>Assignee: Chesnay Schepler
>Priority: Critical
> Fix For: 1.11.0
>
>
> According to 
> https://lists.apache.org/thread.html/rc58099fb0e31d0eac951a7bbf7f8bda8b7b65c9ed0c04622f5333745%40%3Cdev.flink.apache.org%3E,
>  the community has decided to establish stricter API and binary stability 
> guarantees. Concretely, the community voted to guarantee API and binary 
> stability for {{@PublicEvolving}} annotated classes between bug fix releases 
> (x.y.u -> x.y.v).
> Hence, I would suggest activating this check by adding a new 
> {{japicmp-maven-plugin}} entry into Flink's {{pom.xml}} which checks for 
> {{@PublicEvolving}} classes between bug fix releases. We might have to update 
> the release guide to also include updating this configuration entry.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] pnowojski commented on a change in pull request #12261: [FLINK-17823][network] Resolve the race condition while releasing RemoteInputChannel

2020-05-20 Thread GitBox


pnowojski commented on a change in pull request #12261:
URL: https://github.com/apache/flink/pull/12261#discussion_r428179383



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java
##
@@ -181,6 +181,14 @@ void retriggerSubpartitionRequest(int subpartitionIndex) 
throws IOException {
moreAvailable = !receivedBuffers.isEmpty();
}
 
+   if (next == null) {

Review comment:
   What do you mean @zhijiangW? At first glance I would agree with 
@Jiayi-Liao that it shouldn't happen after your fix in this commit, in the 
class below.

##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannelTest.java
##
@@ -1010,6 +1011,56 @@ public void testConcurrentRecycleAndRelease2() throws 
Exception {
}
}
 
+   @Test
+   public void testConcurrentGetNextBufferAndRelease() throws Exception {

Review comment:
   I think we could have tested this bug without concurrency/multi-threading 
by just calling `releaseAllResources` before `getNextBuffer`. The test would 
be much easier to understand and debug, and that would be worth the slightly 
reduced test coverage - especially since we still have ITCases, and in the long 
run we are planning/hoping to make resource releasing go through the mailbox.
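
   For illustration, a single-threaded sketch of that suggested test 
(hypothetical, not taken from the PR: `createRemoteInputChannel()` below merely 
stands in for whatever builders the surrounding tests use):

{code:java}
package org.apache.flink.runtime.io.network.partition.consumer;

import org.junit.Test;

import static org.junit.Assert.assertFalse;

public class ReleasedChannelSketchTest {

	@Test
	public void testGetNextBufferAfterRelease() throws Exception {
		// Construction elided: stands in for the builders RemoteInputChannelTest uses.
		RemoteInputChannel inputChannel = createRemoteInputChannel();

		// Release first, then poll -- no second thread is needed to hit this path.
		inputChannel.releaseAllResources();

		// A released channel should not hand out a buffer (or should fail fast,
		// depending on the contract the fix settles on).
		assertFalse(inputChannel.getNextBuffer().isPresent());
	}

	private static RemoteInputChannel createRemoteInputChannel() {
		throw new UnsupportedOperationException("channel construction elided in this sketch");
	}
}
{code}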





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-16572) CheckPubSubEmulatorTest is flaky on Azure

2020-05-20 Thread Stephan Ewen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112454#comment-17112454
 ] 

Stephan Ewen commented on FLINK-16572:
--

One more instance: 
https://dev.azure.com/sewen0794/19b23adf-d190-4fb4-ae6e-2e92b08923a3/_apis/build/builds/26/logs/102

> CheckPubSubEmulatorTest is flaky on Azure
> -
>
> Key: FLINK-16572
> URL: https://issues.apache.org/jira/browse/FLINK-16572
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Google Cloud PubSub, Tests
>Affects Versions: 1.11.0
>Reporter: Aljoscha Krettek
>Assignee: Richard Deurwaarder
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Log: 
> https://dev.azure.com/aljoschakrettek/Flink/_build/results?buildId=56&view=logs&j=1f3ed471-1849-5d3c-a34c-19792af4ad16&t=ce095137-3e3b-5f73-4b79-c42d3d5f8283&l=7842



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17850) PostgresCatalogITCase . testGroupByInsert() fails on CI

2020-05-20 Thread Stephan Ewen (Jira)
Stephan Ewen created FLINK-17850:


 Summary: PostgresCatalogITCase . testGroupByInsert() fails on CI
 Key: FLINK-17850
 URL: https://issues.apache.org/jira/browse/FLINK-17850
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Ecosystem
Affects Versions: 1.11.0
Reporter: Stephan Ewen
 Fix For: 1.11.0


{{org.apache.flink.connector.jdbc.catalog.PostgresCatalogITCase . 
testGroupByInsert}}


Error:
{code}
2020-05-20T16:36:33.9647037Z org.apache.flink.table.api.ValidationException: 
2020-05-20T16:36:33.9647354Z Field types of query result and registered 
TableSink mypg.postgres.primitive_table2 do not match.

2020-05-20T16:36:33.9648233Z Query schema: [int: INT NOT NULL, EXPR$1: 
VARBINARY(2147483647) NOT NULL, short: SMALLINT NOT NULL, EXPR$3: BIGINT, 
EXPR$4: FLOAT, EXPR$5: DOUBLE, EXPR$6: DECIMAL(10, 5), EXPR$7: BOOLEAN, EXPR$8: 
VARCHAR(2147483647), EXPR$9: CHAR(1) NOT NULL, EXPR$10: CHAR(1) NOT NULL, 
EXPR$11: VARCHAR(20), EXPR$12: TIMESTAMP(5), EXPR$13: DATE, EXPR$14: TIME(0), 
EXPR$15: DECIMAL(38, 18)]

2020-05-20T16:36:33.9650272Z Sink schema: [int: INT, bytea: 
VARBINARY(2147483647), short: SMALLINT, long: BIGINT, real: FLOAT, 
double_precision: DOUBLE, numeric: DECIMAL(10, 5), decimal: DECIMAL(10, 1), 
boolean: BOOLEAN, text: VARCHAR(2147483647), char: CHAR(1), character: CHAR(3), 
character_varying: VARCHAR(20), timestamp: TIMESTAMP(5), date: DATE, time: 
TIME(0), default_numeric: DECIMAL(38, 18)]

2020-05-20T16:36:33.9651218Zat 
org.apache.flink.table.planner.sinks.TableSinkUtils$.validateSchemaAndApplyImplicitCast(TableSinkUtils.scala:100)
2020-05-20T16:36:33.9651689Zat 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:218)
2020-05-20T16:36:33.9652136Zat 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:193)
2020-05-20T16:36:33.9652936Zat scala.Option.map(Option.scala:146)
2020-05-20T16:36:33.9653593Zat 
org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:193)
2020-05-20T16:36:33.9653993Zat 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:152)
2020-05-20T16:36:33.9654428Zat 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:152)
2020-05-20T16:36:33.9654841Zat 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
2020-05-20T16:36:33.9655221Zat 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
2020-05-20T16:36:33.9655759Zat 
scala.collection.Iterator$class.foreach(Iterator.scala:891)
2020-05-20T16:36:33.9656072Zat 
scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
2020-05-20T16:36:33.9656413Zat 
scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
2020-05-20T16:36:33.9656890Zat 
scala.collection.AbstractIterable.foreach(Iterable.scala:54)
2020-05-20T16:36:33.9657211Zat 
scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
2020-05-20T16:36:33.9657525Zat 
scala.collection.AbstractTraversable.map(Traversable.scala:104)
2020-05-20T16:36:33.9657878Zat 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:152)
2020-05-20T16:36:33.9658350Zat 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1217)
2020-05-20T16:36:33.9658784Zat 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:663)
2020-05-20T16:36:33.9659391Zat 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeOperation(TableEnvironmentImpl.java:750)
2020-05-20T16:36:33.9659856Zat 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:653)
2020-05-20T16:36:33.9660507Zat 
org.apache.flink.table.planner.runtime.utils.TableEnvUtil$.execInsertSqlAndWaitResult(TableEnvUtil.scala:27)
2020-05-20T16:36:33.9661115Zat 
org.apache.flink.table.planner.runtime.utils.TableEnvUtil.execInsertSqlAndWaitResult(TableEnvUtil.scala)
2020-05-20T16:36:33.9661583Zat 
org.apache.flink.connector.jdbc.catalog.PostgresCatalogITCase.testGroupByInsert(PostgresCatalogITCase.java:88)
{code}

Full log: 
https://dev.azure.com/sewen0794/19b23adf-d190-4fb4-ae6e-2e92b08923a3/_apis/build/builds/25/logs/93



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #11900: [FLINK-17284][jdbc][postgres] Support serial fields

2020-05-20 Thread GitBox


flinkbot edited a comment on pull request #11900:
URL: https://github.com/apache/flink/pull/11900#issuecomment-618914824


   
   ## CI report:
   
   * 69bce2717b0279a894aa66d15cd4b9b72cd5a474 UNKNOWN
   * 4cf97b2be4447c2d2f94259ad559fefb79a0a727 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1942)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12270: [FLINK-17822][MemMan] Use private Reference.tryHandlePending|waitForReferenceProcessing to trigger GC cleaner

2020-05-20 Thread GitBox


flinkbot edited a comment on pull request #12270:
URL: https://github.com/apache/flink/pull/12270#issuecomment-631585690


   
   ## CI report:
   
   * 5472f93e8f79940941b5ba6637be6810d6b8b46a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1965)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12244: [FLINK-17258][network] Fix couple of ITCases that were failing with enabled unaligned checkpoints

2020-05-20 Thread GitBox


flinkbot edited a comment on pull request #12244:
URL: https://github.com/apache/flink/pull/12244#issuecomment-630723509


   
   ## CI report:
   
   * 7fa2068a283b9471384248c1bf301e3d406b5f48 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1951)
 
   * d0d2042ea348654430863ccb51084c30714d8a47 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1959)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12268: [FLINK-17375] Refactor travis_watchdog.sh into separate ci and azure scripts.

2020-05-20 Thread GitBox


flinkbot edited a comment on pull request #12268:
URL: https://github.com/apache/flink/pull/12268#issuecomment-631512695


   
   ## CI report:
   
   * 4ed6888375869e654816264124703e72439c6148 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1955)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-17849) YARNHighAvailabilityITCase hangs in Azure Pipelines CI

2020-05-20 Thread Stephan Ewen (Jira)
Stephan Ewen created FLINK-17849:


 Summary: YARNHighAvailabilityITCase hangs in Azure Pipelines CI
 Key: FLINK-17849
 URL: https://issues.apache.org/jira/browse/FLINK-17849
 Project: Flink
  Issue Type: Bug
  Components: Deployment / YARN
Affects Versions: 1.11.0
Reporter: Stephan Ewen
 Fix For: 1.11.0


The test seems to hang for 15 minutes, then gets killed.

Full logs: 
https://dev.azure.com/sewen0794/19b23adf-d190-4fb4-ae6e-2e92b08923a3/_apis/build/builds/25/logs/121



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17848) Allow scala datastream api to create named sources

2020-05-20 Thread Zack Loebel-Begelman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112437#comment-17112437
 ] 

Zack Loebel-Begelman commented on FLINK-17848:
--

I'm happy to cut a PR, ideally backporting to 1.9 since that is what I'm 
currently using. 

It seems like we could either duplicate the code, or provide the same name (or 
maybe something different, like "Custom Scala Source") in the Scala API. 
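
For reference, a minimal sketch of the Java-side overload being discussed (the 
job wiring here is illustrative only, not taken from the issue):

{code:java}
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

public class NamedSourceExample {
	public static void main(String[] args) throws Exception {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

		SourceFunction<String> source = new SourceFunction<String>() {
			@Override
			public void run(SourceContext<String> ctx) {
				ctx.collect("hello");
			}

			@Override
			public void cancel() {
			}
		};

		// The Java API takes an explicit operator name; per this issue, the
		// Scala wrapper only exposes addSource(function) with a generic name.
		DataStream<String> stream = env.addSource(source, "My Custom Source");
		stream.print();
		env.execute("named-source-demo");
	}
}
{code}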

> Allow scala datastream api to create named sources
> --
>
> Key: FLINK-17848
> URL: https://issues.apache.org/jira/browse/FLINK-17848
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Scala
>Reporter: Zack Loebel-Begelman
>Priority: Major
>
> Currently you can only provide a custom name to a source operator from the 
> Java API. There is no matching method in the Scala API. It would be nice to 
> allow Scala users to give sources a custom name as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17848) Allow scala datastream api to create named sources

2020-05-20 Thread Zack Loebel-Begelman (Jira)
Zack Loebel-Begelman created FLINK-17848:


 Summary: Allow scala datastream api to create named sources
 Key: FLINK-17848
 URL: https://issues.apache.org/jira/browse/FLINK-17848
 Project: Flink
  Issue Type: Improvement
  Components: API / Scala
Reporter: Zack Loebel-Begelman


Currently you can only provide a custom name to a source operator from the Java 
API. There is no matching method in the Scala API. It would be nice to allow 
Scala users to give sources a custom name as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #12270: [FLINK-17822][MemMan] Use private Reference.tryHandlePending|waitForReferenceProcessing to trigger GC cleaner

2020-05-20 Thread GitBox


flinkbot commented on pull request #12270:
URL: https://github.com/apache/flink/pull/12270#issuecomment-631585690


   
   ## CI report:
   
   * 5472f93e8f79940941b5ba6637be6810d6b8b46a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhijiangW commented on a change in pull request #12261: [FLINK-17823][network] Resolve the race condition while releasing RemoteInputChannel

2020-05-20 Thread GitBox


zhijiangW commented on a change in pull request #12261:
URL: https://github.com/apache/flink/pull/12261#discussion_r428139107



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java
##
@@ -181,6 +181,14 @@ void retriggerSubpartitionRequest(int subpartitionIndex) 
throws IOException {
moreAvailable = !receivedBuffers.isEmpty();
}
 
+   if (next == null) {

Review comment:
   I guess it can probably happen in practice. If the canceler thread has 
already released the respective input channel, the task thread might still 
call `getNextBuffer` on the already-released `receivedBuffers` and then get 
a `null` buffer.
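
   As a generic analogy of that interleaving (plain Java, not Flink's actual 
classes):

{code:java}
import java.util.ArrayDeque;
import java.util.Queue;

public class ReleaseRaceDemo {
	private final Queue<String> receivedBuffers = new ArrayDeque<>();

	// Task thread: after a release, poll() returns null -- the case the
	// `if (next == null)` guard above has to handle.
	synchronized String getNextBuffer() {
		return receivedBuffers.poll();
	}

	// Canceler thread: releases everything the task thread might still poll.
	synchronized void releaseAllResources() {
		receivedBuffers.clear();
	}

	public static void main(String[] args) {
		ReleaseRaceDemo channel = new ReleaseRaceDemo();
		channel.receivedBuffers.add("buffer-0");
		channel.releaseAllResources();               // canceler wins the race
		System.out.println(channel.getNextBuffer()); // prints "null"
	}
}
{code}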





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12270: [FLINK-17822][MemMan] Use private Reference.tryHandlePending|waitForReferenceProcessing to trigger GC cleaner

2020-05-20 Thread GitBox


flinkbot commented on pull request #12270:
URL: https://github.com/apache/flink/pull/12270#issuecomment-631574892


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 5472f93e8f79940941b5ba6637be6810d6b8b46a (Wed May 20 
16:12:20 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12264: [FLINK-17558][netty] Release partitions asynchronously

2020-05-20 Thread GitBox


flinkbot edited a comment on pull request #12264:
URL: https://github.com/apache/flink/pull/12264#issuecomment-631349883


   
   ## CI report:
   
   * 19c5f57b94cc56b70002031618c32d9e6f68effb UNKNOWN
   * bb313e40f5a72dbf20cd0a8b48267063fd4f00af UNKNOWN
   * eafbd98c812227cb7d9ce7158de1a23309855509 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1948)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17826) Add missing custom query support on new jdbc connector

2020-05-20 Thread Zhijiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112398#comment-17112398
 ] 

Zhijiang commented on FLINK-17826:
--

If it is regarded as a bug fix, then it fits the rule for inclusion in 
release-1.11. We should adjust the issue type field to Bug in the Jira board accordingly.

> Add missing custom query support on new jdbc connector
> --
>
> Key: FLINK-17826
> URL: https://issues.apache.org/jira/browse/FLINK-17826
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Jark Wu
>Assignee: Flavio Pompermaier
>Priority: Major
> Fix For: 1.11.0
>
>
> In FLINK-17361, we added custom query support on JDBC tables, but missed adding 
> the same ability to the new jdbc connector (i.e. 
> {{JdbcDynamicTableSourceSinkFactory}}). 
> In the new jdbc connector, maybe we should call it {{scan.query}} to keep it 
> consistent with the other scan options. Besides, we need to make {{"table-name"}} 
> optional, but add validation that "table-name" and "scan.query" shouldn't 
> both be empty, and that "table-name" must not be empty when used as a sink.
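
A hypothetical sketch of that validation rule (class and method names here are 
assumed for illustration, not the actual {{JdbcDynamicTableSourceSinkFactory}} code):

{code:java}
import java.util.Optional;

public final class JdbcOptionsValidation {

	// Hypothetical helper illustrating the proposed rule: "table-name" and
	// "scan.query" must not both be empty, and a sink requires "table-name".
	static void validate(Optional<String> tableName, Optional<String> scanQuery, boolean isSink) {
		if (!tableName.isPresent() && !scanQuery.isPresent()) {
			throw new IllegalArgumentException(
				"Either 'table-name' or 'scan.query' must be set.");
		}
		if (isSink && !tableName.isPresent()) {
			throw new IllegalArgumentException(
				"'table-name' must not be empty when the table is used as a sink.");
		}
	}
}
{code}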



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] azagrebin opened a new pull request #12270: [FLINK-17822][MemMan] Use private Reference.tryHandlePending|waitForReferenceProcessing to trigger GC cleaner

2020-05-20 Thread GitBox


azagrebin opened a new pull request #12270:
URL: https://github.com/apache/flink/pull/12270


   ## What is the purpose of the change
   
   #11109 did not export the `jdk.internal.misc` package used by reflection for the 
GC cleaners of managed memory. Our PR CI does not run Java 11 tests at the moment. 
The package has to be exported via a JVM runtime arg: `--add-opens 
java.base/jdk.internal.misc=ALL-UNNAMED`
   
   If this arg is set for Java 8, it fails the JVM process. Therefore, the fix 
is complicated as we have to do it also for e.g. Yarn CLI where client and 
cluster may run different Java versions.
   
   This PR suggests a quicker fix. We directly call the private method (which has 
to be made accessible via reflection):
   - java.lang.ref.Reference.tryHandlePending(false) // for Java 8
   - java.lang.ref.Reference.waitForReferenceProcessing() // for Java 11
   
   Unfortunately, this leads to the annoying Java 11 warning about illegal 
reflective access.
   We can do the quick fix and think about how to tackle the warning in a 
follow-up; a standalone sketch of the reflective call follows below.
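
   A minimal sketch of that reflective call strategy (illustrative only -- the 
class name and wiring here are assumed, not Flink's actual `JavaGcCleanerWrapper` 
internals; only the two `Reference` methods come from this PR description):

{code:java}
import java.lang.ref.Reference;
import java.lang.reflect.Method;

public final class PendingReferenceProcessor {
	private static final Method PROCESS_METHOD = resolve();

	private static Method resolve() {
		try {
			// Java 8: package-private static boolean tryHandlePending(boolean)
			Method m = Reference.class.getDeclaredMethod("tryHandlePending", boolean.class);
			m.setAccessible(true);
			return m;
		} catch (NoSuchMethodException e) {
			try {
				// Java 9+: private static boolean waitForReferenceProcessing()
				Method m = Reference.class.getDeclaredMethod("waitForReferenceProcessing");
				m.setAccessible(true); // triggers the illegal reflective access warning
				return m;
			} catch (NoSuchMethodException e2) {
				throw new IllegalStateException("No known pending-reference method", e2);
			}
		}
	}

	/** Processes pending references once; returns true if there may be more work. */
	static boolean processPendingReferences() throws Exception {
		return PROCESS_METHOD.getParameterCount() == 1
			? (Boolean) PROCESS_METHOD.invoke(null, false) // tryHandlePending(false): non-blocking
			: (Boolean) PROCESS_METHOD.invoke(null);       // waitForReferenceProcessing()
	}
}
{code}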
   
   ## Verifying this change
   
   Existing unit tests but also running them and e2e tests with Java 11.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in Java 11

2020-05-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17822:
---
Labels: pull-request-available test-stability  (was: test-stability)

> Nightly Flink CLI end-to-end test failed with 
> "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class 
> jdk.internal.misc.SharedSecrets" in Java 11 
> --
>
> Key: FLINK-17822
> URL: https://issues.apache.org/jira/browse/FLINK-17822
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task, Tests
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Assignee: Andrey Zagrebin
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>
> Instance: 
> https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1887/logs/600
> {code}
> 2020-05-19T21:59:39.8829043Z 2020-05-19 21:59:25,193 ERROR 
> org.apache.flink.util.JavaGcCleanerWrapper   [] - FATAL 
> UNEXPECTED - Failed to invoke waitForReferenceProcessing
> 2020-05-19T21:59:39.8829849Z java.lang.IllegalAccessException: class 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner cannot 
> access class jdk.internal.misc.SharedSecrets (in module java.base) because 
> module java.base does not export jdk.internal.misc to unnamed module @54e3658c
> 2020-05-19T21:59:39.8830707Z  at 
> jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361)
>  ~[?:?]
> 2020-05-19T21:59:39.8831166Z  at 
> java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:591) 
> ~[?:?]
> 2020-05-19T21:59:39.8831744Z  at 
> java.lang.reflect.Method.invoke(Method.java:558) ~[?:?]
> 2020-05-19T21:59:39.8832596Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.getJavaLangRefAccess(JavaGcCleanerWrapper.java:362)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8833667Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.tryRunPendingCleaners(JavaGcCleanerWrapper.java:351)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8834712Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$CleanerManager.tryRunPendingCleaners(JavaGcCleanerWrapper.java:207)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8835686Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper.tryRunPendingCleaners(JavaGcCleanerWrapper.java:158)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8836652Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.reserveMemory(UnsafeMemoryBudget.java:94)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8838033Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.verifyEmpty(UnsafeMemoryBudget.java:64)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8839259Z  at 
> org.apache.flink.runtime.memory.MemoryManager.verifyEmpty(MemoryManager.java:172)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8840148Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.verifyMemoryFreed(TaskSlot.java:311)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841035Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.lambda$closeAsync$1(TaskSlot.java:301)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841603Z  at 
> java.util.concurrent.CompletableFuture.uniRunNow(CompletableFuture.java:815) 
> ~[?:?]
> 2020-05-19T21:59:39.8842069Z  at 
> java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:799)
>  ~[?:?]
> 2020-05-19T21:59:39.8842844Z  at 
> java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2121) 
> ~[?:?]
> 2020-05-19T21:59:39.8843828Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.closeAsync(TaskSlot.java:300)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8844790Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlotInternal(TaskSlotTableImpl.java:404)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8845754Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlot(TaskSlotTableImpl.java:365)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8846842Z  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlotInternal(TaskExecutor.java:1589)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8847711Z  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlot(TaskExecutor.java:967)
>  ~[flink-dist_2.11-1.12-SNAPSH

[jira] [Updated] (FLINK-17847) ArrayIndexOutOfBoundsException happens when codegen StreamExec operator

2020-05-20 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-17847:
---
Description: 
user's query:
{code:java}
//source table 
create table json_table( 
w_es BIGINT, 
w_type STRING, 
w_isDdl BOOLEAN,
 w_data ARRAY<ROW<pay_info STRING, online_fee STRING, sign STRING, account_pay_fee DOUBLE>>,
 w_ts TIMESTAMP(3), 
w_table STRING) WITH (
  'connector.type' = 'kafka',
  'connector.version' = '0.10',
  'connector.topic' = 'json-test2',
  'connector.properties.zookeeper.connect' = 'localhost:2181',
  'connector.properties.bootstrap.servers' = 'localhost:9092',
  'connector.properties.group.id' = 'test-jdbc',
  'connector.startup-mode' = 'earliest-offset',
  'format.type' = 'json',
  'format.derive-schema' = 'true'
)
// real data:
{"w_es":1589870637000,"w_type":"INSERT","w_isDdl":false,"w_data":[{"pay_info":"channelId=82&onlineFee=89.0&outTradeNo=0&payId=0&payType=02&rechargeId=4&totalFee=89.0&tradeStatus=success&userId=32590183789575&sign=00","online_fee":"89.0","sign":"00","account_pay_fee":"0.0"}],"w_ts":"2020-05-20T13:58:37.131Z","w_table":"111"}


//query
select w_ts, 'test' as city1_id,  w_data[0].pay_info AS cate3_id,
 w_data as pay_order_id from json_table

{code}
Exception:
{code:java}
//
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1427848Caused by: 
java.lang.ArrayIndexOutOfBoundsException: 1427848 at 
org.apache.flink.table.runtime.util.SegmentsUtil.getByteMultiSegments(SegmentsUtil.java:598)
 at 
org.apache.flink.table.runtime.util.SegmentsUtil.getByte(SegmentsUtil.java:590) 
at 
org.apache.flink.table.runtime.util.SegmentsUtil.bitGet(SegmentsUtil.java:534) 
at org.apache.flink.table.dataformat.BinaryArray.isNullAt(BinaryArray.java:117) 
at StreamExecCalc$10.processElement(Unknown Source)
{code}
 

Looks like in the codegen StreamExecCalc$10 operator some operation visits a 
'-1' index, which should be wrong; this bug exists in both 1.10 and 1.11.
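
As a minimal standalone illustration (plain Java, not Flink's codegen API) of 
why `w_data[0]` ends up probing index -1 -- Flink SQL array access is 1-based, 
and the generated code below subtracts 1 (`((int) 0) - 1`):

{code:java}
public class ArrayIndexDemo {
	public static void main(String[] args) {
		String[] wData = {"row-1"};          // stands in for the single ROW in w_data
		int sqlIndex = 0;                    // w_data[0] as written in the query
		int generatedIndex = sqlIndex - 1;   // SQL arrays are 1-based, so codegen subtracts 1
		System.out.println(wData[generatedIndex]); // ArrayIndexOutOfBoundsException: -1
	}
}
{code}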

 
{code:java}
public class StreamExecCalc$10 extends 
org.apache.flink.table.runtime.operators.AbstractProcessStreamOperator
implements org.apache.flink.streaming.api.operators.OneInputStreamOperator {

private final Object[] references;

private final org.apache.flink.table.dataformat.BinaryString str$3 = 
org.apache.flink.table.dataformat.BinaryString.fromString("test");

private transient 
org.apache.flink.table.runtime.typeutils.BaseArraySerializer typeSerializer$5;
final org.apache.flink.table.dataformat.BoxedWrapperRow out = new 
org.apache.flink.table.dataformat.BoxedWrapperRow(4);
private final org.apache.flink.streaming.runtime.streamrecord.StreamRecord 
outElement = new 
org.apache.flink.streaming.runtime.streamrecord.StreamRecord(null);

public StreamExecCalc$10(
Object[] references,
org.apache.flink.streaming.runtime.tasks.StreamTask task,
org.apache.flink.streaming.api.graph.StreamConfig config,
org.apache.flink.streaming.api.operators.Output output) throws 
Exception {
this.references = references;
typeSerializer$5 = 
(((org.apache.flink.table.runtime.typeutils.BaseArraySerializer) 
references[0]));
this.setup(task, config, output);
}

@Override
public void open() throws Exception {
super.open();

}

@Override
public void 
processElement(org.apache.flink.streaming.runtime.streamrecord.StreamRecord 
element) throws Exception {
org.apache.flink.table.dataformat.BaseRow in1 = 
(org.apache.flink.table.dataformat.BaseRow) element.getValue();

org.apache.flink.table.dataformat.SqlTimestamp field$2;
boolean isNull$2;
org.apache.flink.table.dataformat.BaseArray field$4;
boolean isNull$4;
org.apache.flink.table.dataformat.BaseArray field$6;
org.apache.flink.table.dataformat.BinaryString field$8;
boolean isNull$8;
org.apache.flink.table.dataformat.BinaryString result$9;
boolean isNull$9;

isNull$2 = in1.isNullAt(4);
field$2 = null;
if (!isNull$2) {
field$2 = in1.getTimestamp(4, 3);
}

isNull$4 = in1.isNullAt(3);
field$4 = null;
if (!isNull$4) {
field$4 = in1.getArray(3);
}
field$6 = field$4;
if (!isNull$4) {
field$6 = (org.apache.flink.table.dataformat.BaseArray) 
(typeSerializer$5.copy(field$6));
}
out.setHeader(in1.getHeader());
if (isNull$2) {
out.setNullAt(0);
} else {
out.setNonPrimitiveValue(0, field$2);
}

if (false) {
out.setNullAt(1);
} else {
out.setNonPrimitiveValue(1, 
((org.apache.flink.table.dataformat.BinaryString) str$3));
}

boolean isNull$7 = isNull$4 || false || field$6.isNullAt(((int) 0) - 1);
org.apache.flink.table.dataformat.BaseRow result$7 = isNull$7 ? null : 
field$6.getRow(((int) 0) - 1, 4);

if (isNull$7) {
result$9 = 

[jira] [Updated] (FLINK-17847) ArrayIndexOutOfBoundsException happens when codegen StreamExec operator

2020-05-20 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-17847:
---
Summary: ArrayIndexOutOfBoundsException happens when codegen StreamExec 
operator  (was: ArrayIndexOutOfBoundsException happened in when codegen 
StreamExec operator)

> ArrayIndexOutOfBoundsException happens when codegen StreamExec operator
> ---
>
> Key: FLINK-17847
> URL: https://issues.apache.org/jira/browse/FLINK-17847
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Leonard Xu
>Priority: Major
> Fix For: 1.10.0, 1.11.0
>
>
> user's query:
>  
> {code:java}
> //source table 
> create table json_table( 
> w_es BIGINT, 
> w_type STRING, 
> w_isDdl BOOLEAN,
>  w_data ARRAY<ROW<pay_info STRING, online_fee STRING, sign STRING, account_pay_fee DOUBLE>>,
>  w_ts TIMESTAMP(3), 
> w_table STRING) WITH (
>   'connector.type' = 'kafka',
>   'connector.version' = '0.10',
>   'connector.topic' = 'json-test2',
>   'connector.properties.zookeeper.connect' = 'localhost:2181',
>   'connector.properties.bootstrap.servers' = 'localhost:9092',
>   'connector.properties.group.id' = 'test-jdbc',
>   'connector.startup-mode' = 'earliest-offset',
>   'format.type' = 'json',
>   'format.derive-schema' = 'true'
> )
> // real data:
> {"w_es":1589870637000,"w_type":"INSERT","w_isDdl":false,"w_data":[{"pay_info":"channelId=82&onlineFee=89.0&outTradeNo=0&payId=0&payType=02&rechargeId=4&totalFee=89.0&tradeStatus=success&userId=32590183789575&sign=00","online_fee":"89.0","sign":"00","account_pay_fee":"0.0"}],"w_ts":"2020-05-20T13:58:37.131Z","w_table":"111"}
> //query
> select w_ts, 'test' as city1_id,  w_data[0].pay_info AS cate3_id,
>  w_data as pay_order_id from json_table
> {code}
>  
> Exception:
>  
>  
> {code:java}
> //
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1427848Caused by: 
> java.lang.ArrayIndexOutOfBoundsException: 1427848 at 
> org.apache.flink.table.runtime.util.SegmentsUtil.getByteMultiSegments(SegmentsUtil.java:598)
>  at 
> org.apache.flink.table.runtime.util.SegmentsUtil.getByte(SegmentsUtil.java:590)
>  at 
> org.apache.flink.table.runtime.util.SegmentsUtil.bitGet(SegmentsUtil.java:534)
>  at 
> org.apache.flink.table.dataformat.BinaryArray.isNullAt(BinaryArray.java:117) 
> at StreamExecCalc$10.processElement(Unknown Source)
> {code}
>  
>  
> Looks like in the codegen StreamExecCalc$10 operator some operation visits a 
> '-1' index, which should be wrong; this bug exists in both 1.10 and 1.11.
>  
> {code:java}
> public class StreamExecCalc$10 extends 
> org.apache.flink.table.runtime.operators.AbstractProcessStreamOperator
> implements 
> org.apache.flink.streaming.api.operators.OneInputStreamOperator {
> private final Object[] references;
> private final org.apache.flink.table.dataformat.BinaryString str$3 = 
> org.apache.flink.table.dataformat.BinaryString.fromString("test");
> private transient 
> org.apache.flink.table.runtime.typeutils.BaseArraySerializer typeSerializer$5;
> final org.apache.flink.table.dataformat.BoxedWrapperRow out = new 
> org.apache.flink.table.dataformat.BoxedWrapperRow(4);
> private final 
> org.apache.flink.streaming.runtime.streamrecord.StreamRecord outElement = new 
> org.apache.flink.streaming.runtime.streamrecord.StreamRecord(null);
> public StreamExecCalc$10(
> Object[] references,
> org.apache.flink.streaming.runtime.tasks.StreamTask task,
> org.apache.flink.streaming.api.graph.StreamConfig config,
> org.apache.flink.streaming.api.operators.Output output) throws 
> Exception {
> this.references = references;
> typeSerializer$5 = 
> (((org.apache.flink.table.runtime.typeutils.BaseArraySerializer) 
> references[0]));
> this.setup(task, config, output);
> }
> @Override
> public void open() throws Exception {
> super.open();
> }
> @Override
> public void 
> processElement(org.apache.flink.streaming.runtime.streamrecord.StreamRecord 
> element) throws Exception {
> org.apache.flink.table.dataformat.BaseRow in1 = 
> (org.apache.flink.table.dataformat.BaseRow) element.getValue();
> org.apache.flink.table.dataformat.SqlTimestamp field$2;
> boolean isNull$2;
> org.apache.flink.table.dataformat.BaseArray field$4;
> boolean isNull$4;
> org.apache.flink.table.dataformat.BaseArray field$6;
> org.apache.flink.table.dataformat.BinaryString field$8;
> boolean isNull$8;
> org.apache.flink.table.dataformat.BinaryString result$9;
> boolean isNull$9;
> isNull$2 = in1.isNullAt(4);
> field$2 = null;
> if (!isNull$2) {
> field$2 = in1.getTimestamp(4, 3);
> }
> isNul

[jira] [Commented] (FLINK-17817) CollectResultFetcher fails with EOFException in AggregateReduceGroupingITCase

2020-05-20 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112389#comment-17112389
 ] 

Dawid Wysakowicz commented on FLINK-17817:
--

another instance: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1946&view=results

> CollectResultFetcher fails with EOFException in AggregateReduceGroupingITCase
> -
>
> Key: FLINK-17817
> URL: https://issues.apache.org/jira/browse/FLINK-17817
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Caizhi Weng
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>
> CI: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1826&view=logs&j=e25d5e7e-2a9c-5589-4940-0b638d75a414&t=f83cd372-208c-5ec4-12a8-337462457129
> {code}
> 2020-05-19T10:34:18.3224679Z [ERROR] 
> testSingleAggOnTable_SortAgg(org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase)
>   Time elapsed: 7.537 s  <<< ERROR!
> 2020-05-19T10:34:18.3225273Z java.lang.RuntimeException: Failed to fetch next 
> result
> 2020-05-19T10:34:18.3227634Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:92)
> 2020-05-19T10:34:18.3228518Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:63)
> 2020-05-19T10:34:18.3229170Z  at 
> org.apache.flink.shaded.guava18.com.google.common.collect.Iterators.addAll(Iterators.java:361)
> 2020-05-19T10:34:18.3229863Z  at 
> org.apache.flink.shaded.guava18.com.google.common.collect.Lists.newArrayList(Lists.java:160)
> 2020-05-19T10:34:18.3230586Z  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300)
> 2020-05-19T10:34:18.3231303Z  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:141)
> 2020-05-19T10:34:18.3231996Z  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:107)
> 2020-05-19T10:34:18.3232847Z  at 
> org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable(AggregateReduceGroupingITCase.scala:176)
> 2020-05-19T10:34:18.3233694Z  at 
> org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable_SortAgg(AggregateReduceGroupingITCase.scala:122)
> 2020-05-19T10:34:18.3234461Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-05-19T10:34:18.3234983Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-05-19T10:34:18.3235632Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-05-19T10:34:18.3236615Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-05-19T10:34:18.3237256Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-05-19T10:34:18.3237965Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-05-19T10:34:18.3238750Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-05-19T10:34:18.3239314Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-05-19T10:34:18.3239838Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-05-19T10:34:18.3240362Z  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2020-05-19T10:34:18.3240803Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-05-19T10:34:18.3243624Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-05-19T10:34:18.3244531Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-05-19T10:34:18.3245325Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-05-19T10:34:18.3246086Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-05-19T10:34:18.3246765Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-05-19T10:34:18.3247390Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-05-19T10:34:18.3248012Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-05-19T10:34:18.3248779Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-05-19T10:34:18.3249417Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-05-19T10:34:18.3250357Z  at 

[jira] [Created] (FLINK-17847) ArrayIndexOutOfBoundsException happened in when codegen StreamExec operator

2020-05-20 Thread Leonard Xu (Jira)
Leonard Xu created FLINK-17847:
--

 Summary: ArrayIndexOutOfBoundsException happened in when codegen 
StreamExec operator
 Key: FLINK-17847
 URL: https://issues.apache.org/jira/browse/FLINK-17847
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.10.0, 1.11.0
Reporter: Leonard Xu
 Fix For: 1.11.0, 1.10.0


user's query:

 
{code:java}
//source table 
create table json_table( 
w_es BIGINT, 
w_type STRING, 
w_isDdl BOOLEAN,
 w_data ARRAY<ROW<pay_info STRING, online_fee STRING, sign STRING, account_pay_fee DOUBLE>>,
 w_ts TIMESTAMP(3), 
w_table STRING) WITH (
  'connector.type' = 'kafka',
  'connector.version' = '0.10',
  'connector.topic' = 'json-test2',
  'connector.properties.zookeeper.connect' = 'localhost:2181',
  'connector.properties.bootstrap.servers' = 'localhost:9092',
  'connector.properties.group.id' = 'test-jdbc',
  'connector.startup-mode' = 'earliest-offset',
  'format.type' = 'json',
  'format.derive-schema' = 'true'
)
// real data:
{"w_es":1589870637000,"w_type":"INSERT","w_isDdl":false,"w_data":[{"pay_info":"channelId=82&onlineFee=89.0&outTradeNo=0&payId=0&payType=02&rechargeId=4&totalFee=89.0&tradeStatus=success&userId=32590183789575&sign=00","online_fee":"89.0","sign":"00","account_pay_fee":"0.0"}],"w_ts":"2020-05-20T13:58:37.131Z","w_table":"111"}


//query
select w_ts, 'test' as city1_id,  w_data[0].pay_info AS cate3_id,
 w_data as pay_order_id from json_table

{code}
 

Exception:

 

 
{code:java}
//
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1427848Caused by: 
java.lang.ArrayIndexOutOfBoundsException: 1427848 at 
org.apache.flink.table.runtime.util.SegmentsUtil.getByteMultiSegments(SegmentsUtil.java:598)
 at 
org.apache.flink.table.runtime.util.SegmentsUtil.getByte(SegmentsUtil.java:590) 
at 
org.apache.flink.table.runtime.util.SegmentsUtil.bitGet(SegmentsUtil.java:534) 
at org.apache.flink.table.dataformat.BinaryArray.isNullAt(BinaryArray.java:117) 
at StreamExecCalc$10.processElement(Unknown Source)
{code}
 

 

Looks like in the codegen StreamExecCalc$10 operator some operation visits a 
'-1' index, which should be wrong; this bug exists in both 1.10 and 1.11.

 
{code:java}
public class StreamExecCalc$10 extends 
org.apache.flink.table.runtime.operators.AbstractProcessStreamOperator
implements org.apache.flink.streaming.api.operators.OneInputStreamOperator {

private final Object[] references;

private final org.apache.flink.table.dataformat.BinaryString str$3 = 
org.apache.flink.table.dataformat.BinaryString.fromString("test");

private transient 
org.apache.flink.table.runtime.typeutils.BaseArraySerializer typeSerializer$5;
final org.apache.flink.table.dataformat.BoxedWrapperRow out = new 
org.apache.flink.table.dataformat.BoxedWrapperRow(4);
private final org.apache.flink.streaming.runtime.streamrecord.StreamRecord 
outElement = new 
org.apache.flink.streaming.runtime.streamrecord.StreamRecord(null);

public StreamExecCalc$10(
Object[] references,
org.apache.flink.streaming.runtime.tasks.StreamTask task,
org.apache.flink.streaming.api.graph.StreamConfig config,
org.apache.flink.streaming.api.operators.Output output) throws 
Exception {
this.references = references;
typeSerializer$5 = 
(((org.apache.flink.table.runtime.typeutils.BaseArraySerializer) 
references[0]));
this.setup(task, config, output);
}

@Override
public void open() throws Exception {
super.open();

}

@Override
public void 
processElement(org.apache.flink.streaming.runtime.streamrecord.StreamRecord 
element) throws Exception {
org.apache.flink.table.dataformat.BaseRow in1 = 
(org.apache.flink.table.dataformat.BaseRow) element.getValue();

org.apache.flink.table.dataformat.SqlTimestamp field$2;
boolean isNull$2;
org.apache.flink.table.dataformat.BaseArray field$4;
boolean isNull$4;
org.apache.flink.table.dataformat.BaseArray field$6;
org.apache.flink.table.dataformat.BinaryString field$8;
boolean isNull$8;
org.apache.flink.table.dataformat.BinaryString result$9;
boolean isNull$9;

isNull$2 = in1.isNullAt(4);
field$2 = null;
if (!isNull$2) {
field$2 = in1.getTimestamp(4, 3);
}

isNull$4 = in1.isNullAt(3);
field$4 = null;
if (!isNull$4) {
field$4 = in1.getArray(3);
}
field$6 = field$4;
if (!isNull$4) {
field$6 = (org.apache.flink.table.dataformat.BaseArray) 
(typeSerializer$5.copy(field$6));
}
out.setHeader(in1.getHeader());
if (isNull$2) {
out.setNullAt(0);
} else {
out.setNonPrimitiveValue(0, field$2);
}

if (false) {
out.setNullAt(1);
} else {
out.setNonPrimitiveValue(1, 
((org.apache.flink.table.data

[GitHub] [flink-statefun] abc863377 commented on pull request #106: [FLINK-16985] Stateful Functions Examples have their specific job names

2020-05-20 Thread GitBox


abc863377 commented on pull request #106:
URL: https://github.com/apache/flink-statefun/pull/106#issuecomment-631562660


   Hi @igalshilman, 
   Thanks for the suggestion. I'll follow your advice and keep the Dockerfile 
simple.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



