[jira] [Commented] (FLINK-17510) StreamingKafkaITCase. testKafka timeouts on downloading Kafka

2021-02-25 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290762#comment-17290762
 ] 

Dawid Wysakowicz commented on FLINK-17510:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13730&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529

> StreamingKafkaITCase. testKafka timeouts on downloading Kafka
> -
>
> Key: FLINK-17510
> URL: https://issues.apache.org/jira/browse/FLINK-17510
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Connectors / Kafka, Tests
>Affects Versions: 1.11.3, 1.12.1
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> CI: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=585&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> {code}
> 2020-05-05T00:06:49.7268716Z [INFO] 
> ---
> 2020-05-05T00:06:49.7268938Z [INFO]  T E S T S
> 2020-05-05T00:06:49.7269282Z [INFO] 
> ---
> 2020-05-05T00:06:50.5336315Z [INFO] Running 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 2020-05-05T00:11:26.8603439Z [ERROR] Tests run: 3, Failures: 0, Errors: 2, 
> Skipped: 0, Time elapsed: 276.323 s <<< FAILURE! - in 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 2020-05-05T00:11:26.8604882Z [ERROR] testKafka[1: 
> kafka-version:0.11.0.2](org.apache.flink.tests.util.kafka.StreamingKafkaITCase)
>   Time elapsed: 120.024 s  <<< ERROR!
> 2020-05-05T00:11:26.8605942Z java.io.IOException: Process ([wget, -q, -P, 
> /tmp/junit2815750531595874769/downloads/1290570732, 
> https://archive.apache.org/dist/kafka/0.11.0.2/kafka_2.11-0.11.0.2.tgz]) 
> exceeded timeout (12) or number of retries (3).
> 2020-05-05T00:11:26.8606732Z  at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlockingWithRetry(AutoClosableProcess.java:132)
> 2020-05-05T00:11:26.8607321Z  at 
> org.apache.flink.tests.util.cache.AbstractDownloadCache.getOrDownload(AbstractDownloadCache.java:127)
> 2020-05-05T00:11:26.8607826Z  at 
> org.apache.flink.tests.util.cache.LolCache.getOrDownload(LolCache.java:31)
> 2020-05-05T00:11:26.8608343Z  at 
> org.apache.flink.tests.util.kafka.LocalStandaloneKafkaResource.setupKafkaDist(LocalStandaloneKafkaResource.java:98)
> 2020-05-05T00:11:26.8608892Z  at 
> org.apache.flink.tests.util.kafka.LocalStandaloneKafkaResource.before(LocalStandaloneKafkaResource.java:92)
> 2020-05-05T00:11:26.8609602Z  at 
> org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:46)
> 2020-05-05T00:11:26.8610026Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-05-05T00:11:26.8610553Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-05-05T00:11:26.8610958Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-05-05T00:11:26.8611388Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-05-05T00:11:26.8612214Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-05-05T00:11:26.8612706Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-05-05T00:11:26.8613109Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-05-05T00:11:26.8613551Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-05-05T00:11:26.8614019Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-05-05T00:11:26.8614442Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-05-05T00:11:26.8614869Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-05-05T00:11:26.8615251Z  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2020-05-05T00:11:26.8615654Z  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2020-05-05T00:11:26.8616060Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-05-05T00:11:26.8616465Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-05-05T00:11:26.8616893Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-05-05T00:11:26.8617893Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-05-05T00:11:26.8618490Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-05-05T00:11:26.8619056Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-05-05T00:11:26.8619589Z  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2020-05-05T00:11:26.8620073Z  at 
> org.junit.runners.Suite.r
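
The failing helper in the stack trace above (`AutoClosableProcess$AutoClosableProcessBuilder.runBlockingWithRetry`) wraps an external process such as `wget` in a timeout-plus-retries loop. A minimal, hypothetical sketch of that pattern — not Flink's actual implementation; the class name and error handling here are illustrative:

```java
import java.io.IOException;
import java.time.Duration;
import java.util.Arrays;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: start an external process, wait up to a timeout,
// and retry a fixed number of times before giving up with an IOException.
public class RetryingProcessRunner {
    public static void runBlockingWithRetry(String[] command, Duration timeout, int retries)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= retries; attempt++) {
            Process process = new ProcessBuilder(command).start();
            try {
                // waitFor returns false on timeout; exitValue() != 0 means failure
                if (process.waitFor(timeout.toMillis(), TimeUnit.MILLISECONDS)
                        && process.exitValue() == 0) {
                    return; // success
                }
            } finally {
                process.destroyForcibly(); // no-op if the process already exited
            }
            last = new IOException("attempt " + attempt + " failed or timed out");
        }
        throw new IOException(
                "Process " + Arrays.toString(command)
                        + " exceeded timeout or number of retries (" + retries + ")",
                last);
    }
}
```

With this shape, a flaky download like the Kafka tarball fetch surfaces as a single exception after the last retry, which matches the message seen in the log.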

[jira] [Commented] (FLINK-21456) TableResult#print() should correctly stringify values of TIMESTAMP type in SQL format

2021-02-25 Thread Timo Walther (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290764#comment-17290764
 ] 

Timo Walther commented on FLINK-21456:
--

Yes, we can do that for the SQL Client. But if we need it for `TableResult#print()` 
anyway, we could also unify this into one utility, because I don't think CAST 
TO STRING is well-defined for all types yet.

> TableResult#print() should correctly stringify values of TIMESTAMP type in 
> SQL format
> -
>
> Key: FLINK-21456
> URL: https://issues.apache.org/jira/browse/FLINK-21456
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Jark Wu
>Priority: Major
>
> Currently {{TableResult#print()}} simply uses {{Object#toString()}} as the 
> string representation of the fields. This is not SQL compliant, because for 
> TIMESTAMP and TIMESTAMP_LTZ, the string representation should be {{2021-02-23 
> 17:30:00}} instead of {{2021-02-23T17:30:00Z}}.
> Note: we may need to update {{PrintUtils#rowToString(Row)}} and also the SQL 
> Client, which invokes this method. 
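
For illustration, the difference described above can be reproduced with plain `java.time` (the formatter pattern below is an example; it is not necessarily the one Flink's `PrintUtils` ends up using):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Object#toString() on java.time values yields the ISO-8601 "T" separator,
// while the SQL-style string uses a space between date and time.
public class TimestampStringExample {
    static final DateTimeFormatter SQL_FORMAT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    public static void main(String[] args) {
        LocalDateTime ts = LocalDateTime.of(2021, 2, 23, 17, 30, 0);
        System.out.println(ts.toString());         // 2021-02-23T17:30 (ISO-8601, seconds elided)
        System.out.println(ts.format(SQL_FORMAT)); // 2021-02-23 17:30:00 (SQL style)
    }
}
```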



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21500) Failed to upload logs in Azure

2021-02-25 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-21500:


 Summary: Failed to upload logs in Azure
 Key: FLINK-21500
 URL: https://issues.apache.org/jira/browse/FLINK-21500
 Project: Flink
  Issue Type: Bug
  Components: Build System / Azure Pipelines
Affects Versions: 1.13.0
Reporter: Dawid Wysakowicz


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13731&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=b6d98a48-853a-5282-189e-65ad32aabf3b

{code}
This job was abandoned. We have detected that logs from the agent may have not 
finished uploading. We have included our in-memory record of all log lines 
uploaded before we lost contact with the agent:
Starting: Upload Logs
==
Task : Publish Pipeline Artifacts
Description  : Publish (upload) a file or directory as a named artifact for the 
current run
Version  : 1.2.3
Author   : Microsoft Corporation
Help : 
https://docs.microsoft.com/azure/devops/pipelines/tasks/utility/publish-pipeline-artifact
==

{code}





[GitHub] [flink-statefun] tzulitai opened a new pull request #203: [FLINK-21498] Avoid copying when converting byte[] to ByteString

2021-02-25 Thread GitBox


tzulitai opened a new pull request #203:
URL: https://github.com/apache/flink-statefun/pull/203


   There are a few places where we can be more efficient by avoiding byte-array 
copying, and we know that it is safe to do so (because we won't be mutating the 
byte array):
   
   - The message payload serializers
   - In `PersistedRemoteFunctionValues`, when attaching state bytes to a 
ToFunction
   - In deserializers used by auto-routable Kafka / Kinesis ingresses.
   
   ---
   
   ## Verifying the change
   
   E2E tests pass



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21498) Avoid copying when converting byte[] to ByteString in StateFun

2021-02-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21498:
---
Labels: pull-request-available  (was: )

> Avoid copying when converting byte[] to ByteString in StateFun
> --
>
> Key: FLINK-21498
> URL: https://issues.apache.org/jira/browse/FLINK-21498
> Project: Flink
>  Issue Type: Task
>  Components: Stateful Functions
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Major
>  Labels: pull-request-available
>
> There are a few places in StateFun where we can be more efficient with byte[] 
> to Protobuf ByteString conversions, by just wrapping the byte[] instead of 
> copying it, since we know that the byte array can no longer be mutated.
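
The trade-off can be sketched with `java.nio.ByteBuffer` standing in for Protobuf's ByteString (the Protobuf API is deliberately not used here, so this is only an analogy): wrapping is O(1) but shares the underlying array, which is safe only when the array is never mutated afterwards — exactly the condition the issue relies on.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Wrap-vs-copy: a copy is isolated from later mutations of the source
// array, while a wrapper observes them.
public class WrapVsCopy {
    public static void main(String[] args) {
        byte[] payload = {1, 2, 3};

        // O(n): defensive copy, like ByteString.copyFrom
        ByteBuffer copied = ByteBuffer.wrap(Arrays.copyOf(payload, payload.length));
        // O(1): shares the array, like an unsafe wrap
        ByteBuffer wrapped = ByteBuffer.wrap(payload);

        payload[0] = 42; // a later mutation...
        System.out.println(copied.get(0));  // 1  -- the copy is isolated
        System.out.println(wrapped.get(0)); // 42 -- the wrapper sees the change
    }
}
```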





[GitHub] [flink] flinkbot commented on pull request #15018: [FLINK-21460][table api] Use Configuration to create TableEnvironment

2021-02-25 Thread GitBox


flinkbot commented on pull request #15018:
URL: https://github.com/apache/flink/pull/15018#issuecomment-785717193


   
   ## CI report:
   
   * a44e1a4752e0b561b37b6403073e161157975a0e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Updated] (FLINK-21497) FileSourceTextLinesITCase.testBoundedTextFileSourceWithJobManagerFailover fail

2021-02-25 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-21497:
--
Fix Version/s: 1.13.0

> FileSourceTextLinesITCase.testBoundedTextFileSourceWithJobManagerFailover fail
> --
>
> Key: FLINK-21497
> URL: https://issues.apache.org/jira/browse/FLINK-21497
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
> Fix For: 1.13.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13722&view=logs&j=a5ef94ef-68c2-57fd-3794-dc108ed1c495&t=9c1ddabe-d186-5a2c-5fcc-f3cafb3ec699
> {code:java}
> 2021-02-24T22:47:55.4844360Z java.lang.RuntimeException: Failed to fetch next 
> result
> 2021-02-24T22:47:55.4847421Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
> 2021-02-24T22:47:55.4848395Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> 2021-02-24T22:47:55.4849262Z  at 
> org.apache.flink.connector.file.src.FileSourceTextLinesITCase.testBoundedTextFileSource(FileSourceTextLinesITCase.java:148)
> 2021-02-24T22:47:55.4850030Z  at 
> org.apache.flink.connector.file.src.FileSourceTextLinesITCase.testBoundedTextFileSourceWithJobManagerFailover(FileSourceTextLinesITCase.java:108)
> 2021-02-24T22:47:55.4850780Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-02-24T22:47:55.4851322Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-02-24T22:47:55.4858977Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-02-24T22:47:55.4860737Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-02-24T22:47:55.4861855Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2021-02-24T22:47:55.4862873Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-02-24T22:47:55.4863598Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2021-02-24T22:47:55.4864289Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-02-24T22:47:55.4864937Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2021-02-24T22:47:55.4865570Z  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2021-02-24T22:47:55.4866152Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2021-02-24T22:47:55.4866670Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2021-02-24T22:47:55.4867172Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2021-02-24T22:47:55.4867765Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2021-02-24T22:47:55.4868588Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2021-02-24T22:47:55.4869683Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2021-02-24T22:47:55.4886595Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2021-02-24T22:47:55.4887656Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2021-02-24T22:47:55.4888451Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2021-02-24T22:47:55.4889199Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2021-02-24T22:47:55.4889845Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2021-02-24T22:47:55.4890447Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2021-02-24T22:47:55.4891037Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2021-02-24T22:47:55.4891604Z  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2021-02-24T22:47:55.4892235Z  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2021-02-24T22:47:55.4892959Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2021-02-24T22:47:55.4893573Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2021-02-24T22:47:55.4894216Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2021-02-24T22:47:55.4894824Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2021-02-24T22:47:55.4895425Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2021-02-24T22:47:55.4896027Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2021-02-24T22:47:55.4896638Z  at 
> org.apache.maven.sure

[GitHub] [flink] wuchong commented on a change in pull request #14977: [FLINK-18726][table-planner-blink] Support INSERT INTO specific colum…

2021-02-25 Thread GitBox


wuchong commented on a change in pull request #14977:
URL: https://github.com/apache/flink/pull/14977#discussion_r582641238



##
File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/runtime/batch/table/TableSinkITCase.scala
##
@@ -207,4 +212,336 @@ class TableSinkITCase extends BatchTestBase {
     val expected = List("book,1,12", "book,4,11", "fruit,3,44")
     assertEquals(expected.sorted, result.sorted)
   }
+
+  @Test
+  def testSinkWithPartitionAndComputedColumn(): Unit = {
+    tEnv.executeSql(
+      s"""
+         |CREATE TABLE testSink (
+         |  `a` INT,
+         |  `b` AS `a` + 1,
+         |  `c` STRING,
+         |  `d` INT,
+         |  `e` DOUBLE
+         |)
+         |PARTITIONED BY (`c`, `d`)
+         |WITH (
+         |  'connector' = 'values',
+         |  'sink-insert-only' = 'true'
+         |)
+         |""".stripMargin)
+
+    registerCollection("MyTable", simpleData2, simpleType2, "x, y", nullableOfSimpleData2)
+
+    tEnv.executeSql(
+      s"""
+         |INSERT INTO testSink PARTITION(`c`='2021', `d`=1)
+         |SELECT x, sum(y) FROM MyTable GROUP BY x
+         |""".stripMargin).await()
+    val expected = List(
+      "1,2021,1,0.1",
+      "2,2021,1,0.4",
+      "3,2021,1,1.0",
+      "4,2021,1,2.2",
+      "5,2021,1,3.9")
+    val result = TestValuesTableFactory.getResults("testSink")
+    assertEquals(expected.sorted, result.sorted)
+  }
+
+  @Test
+  def testPartialInsert(): Unit = {
+    tEnv.executeSql(
+      s"""
+         |CREATE TABLE testSink (
+         |  `a` INT,
+         |  `b` DOUBLE
+         |)
+         |WITH (
+         |  'connector' = 'values',
+         |  'sink-insert-only' = 'true'
+         |)
+         |""".stripMargin)
+
+    registerCollection("MyTable", simpleData2, simpleType2, "x, y", nullableOfSimpleData2)
+
+    tEnv.executeSql(
+      s"""
+         |INSERT INTO testSink (b)
+         |SELECT sum(y) FROM MyTable GROUP BY x
+         |""".stripMargin).await()
+    val expected = List(
+      "null,0.1",
+      "null,0.4",
+      "null,1.0",
+      "null,2.2",
+      "null,3.9")
+    val result = TestValuesTableFactory.getResults("testSink")
+    assertEquals(expected.sorted, result.sorted)
+  }
+
+  @Test
+  def testPartialInsertWithNotNullColumn(): Unit = {
+    tEnv.executeSql(
+      s"""
+         |CREATE TABLE testSink (
+         |  `a` INT NOT NULL,
+         |  `b` DOUBLE
+         |)
+         |WITH (
+         |  'connector' = 'values',
+         |  'sink-insert-only' = 'true'
+         |)
+         |""".stripMargin)
+
+    registerCollection("MyTable", simpleData2, simpleType2, "x, y", nullableOfSimpleData2)
+
+    expectedEx.expect(classOf[ValidationException])
+    expectedEx.expectMessage("Column 'a' has no default value and does not allow NULLs")
+
+    tEnv.executeSql(
+      s"""
+         |INSERT INTO testSink (b)
+         |SELECT sum(y) FROM MyTable GROUP BY x
+         |""".stripMargin).await()
+  }
+
+  @Test
+  def testPartialInsertWithPartitionAndComputedColumn(): Unit = {
+    tEnv.executeSql(
+      s"""
+         |CREATE TABLE testSink (
+         |  `a` INT,
+         |  `b` AS `a` + 1,
+         |  `c` STRING,
+         |  `d` INT,
+         |  `e` DOUBLE
+         |)
+         |PARTITIONED BY (`c`, `d`)
+         |WITH (
+         |  'connector' = 'values',
+         |  'sink-insert-only' = 'true'
+         |)
+         |""".stripMargin)
+
+    registerCollection("MyTable", simpleData2, simpleType2, "x, y", nullableOfSimpleData2)
+
+    tEnv.executeSql(
+      s"""
+         |INSERT INTO testSink PARTITION(`c`='2021', `d`=1) (e)
+         |SELECT sum(y) FROM MyTable GROUP BY x
+         |""".stripMargin).await()
+    val expected = List(
+      "null,2021,1,0.1",
+      "null,2021,1,0.4",
+      "null,2021,1,1.0",
+      "null,2021,1,2.2",
+      "null,2021,1,3.9")
+    val result = TestValuesTableFactory.getResults("testSink")
+    assertEquals(expected.sorted, result.sorted)
+  }
+
+  @Test
+  def testFullInsertWithPartitionAndComputedColumn(): Unit = {
+    tEnv.executeSql(
+      s"""
+         |CREATE TABLE testSink (
+         |  `a` INT,
+         |  `b` AS `a` + 1,
+         |  `c` STRING,
+         |  `d` INT,
+         |  `e` DOUBLE
+         |)
+         |PARTITIONED BY (`c`, `d`)
+         |WITH (
+         |  'connector' = 'values',
+         |  'sink-insert-only' = 'true'
+         |)
+         |""".stripMargin)
+
+    registerCollection("MyTable", simpleData2, simpleType2, "x, y", nullableOfSimpleData2)
+
+    tEnv.executeSql(
+      s"""
+         |INSERT INTO testSink PARTITION(`c`='2021', `d`=1) (a, e)
+         |SELECT x, sum(y) FROM MyTable GROUP BY x
+         |""".stripMargin).await()
+    val expected = List(
+      "1,2021,1,0.1",
+      "2,2021,1,0.4",
+      "3,2021,1,1.0",
+      "4,2021,1,2.2",
+      "5,2021,1,3.9")
+    val result = TestValuesTableFactory.getResults("testSink")
+    assertEquals(expected.sorted, result.sorted)
+  }

[jira] [Commented] (FLINK-21467) Document possible recommended usage of Bounded{One/Multi}Input.endInput and emphasize that they could be called multiple times

2021-02-25 Thread Kezhu Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290781#comment-17290781
 ] 

Kezhu Wang commented on FLINK-21467:


Hi [~pnowojski], I see the possibility. But I think there is little Flink can 
do to cope with this kind of issue. The checkpoint could be a savepoint 
triggered from the user side, and the "non-deterministic logic" could be a 
change from the user (e.g. a change of stoppingOffsets in KafkaSource). In this 
case, after resuming from the latest checkpoint/savepoint, {{endOfInput}} has 
already run once, but that run does not belong to the current execution.

I think the documentation should perhaps focus more on the "no guarantee" these 
methods give for committing side effects to external systems.
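
A self-contained sketch of why an idempotency guard matters here (the interface is re-declared locally for illustration; the real one is `org.apache.flink.streaming.api.operators.BoundedOneInput`): because `endInput()` may effectively run more than once across a restore, any side effect committed from it must tolerate repetition.

```java
// Illustrative only: a guard flag stands in for state that would have to be
// checkpointed in a real operator for the guard to survive a restore.
public class IdempotentEndInput {
    interface BoundedOneInput {
        void endInput() throws Exception;
    }

    static class CommittingOperator implements BoundedOneInput {
        private boolean committed; // would live in checkpointed state in practice
        int commits;

        @Override
        public void endInput() {
            if (committed) {
                return; // guard against an effectively repeated call
            }
            committed = true;
            commits++; // stand-in for committing the final result externally
        }
    }
}
```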

> Document possible recommended usage of Bounded{One/Multi}Input.endInput and 
> emphasize that they could be called multiple times
> --
>
> Key: FLINK-21467
> URL: https://issues.apache.org/jira/browse/FLINK-21467
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.13.0
>Reporter: Kezhu Wang
>Priority: Major
>
> It is too tempting to use these APIs, especially {{BoundedOneInput.endInput}}, 
> to commit the final result before FLIP-147 is delivered. And this will cause 
> re-commits after failover, as [~gaoyunhaii] has pointed out in FLINK-21132.
> I have 
> [pointed|https://github.com/apache/iceberg/issues/2033#issuecomment-784153620]
>  this out in 
> [apache/iceberg#2033|https://github.com/apache/iceberg/issues/2033], please 
> correct me if I was wrong.
> cc [~aljoscha] [~pnowojski] [~roman_khachatryan]





[jira] [Commented] (FLINK-21133) FLIP-27 Source does not work with synchronous savepoint

2021-02-25 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290782#comment-17290782
 ] 

Till Rohrmann commented on FLINK-21133:
---

Yes, let's move this discussion to the FLIP-147 discussion thread. Let's use 
this ticket to discuss how to fix the FLIP-27 sources for stop-with-savepoint 
even if it is just a quick fix for the time being.

> FLIP-27 Source does not work with synchronous savepoint
> ---
>
> Key: FLINK-21133
> URL: https://issues.apache.org/jira/browse/FLINK-21133
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core, API / DataStream, Runtime / Checkpointing
>Affects Versions: 1.11.3, 1.12.1
>Reporter: Kezhu Wang
>Priority: Critical
> Fix For: 1.11.4, 1.13.0, 1.12.3
>
>
> I have pushed branch 
> [synchronous-savepoint-conflict-with-bounded-end-input-case|https://github.com/kezhuw/flink/commits/synchronous-savepoint-conflict-with-bounded-end-input-case]
>  in my repository. {{SavepointITCase.testStopSavepointWithFlip27Source}} 
> failed due to timeout.
> See also FLINK-21132 and 
> [apache/iceberg#2033|https://github.com/apache/iceberg/issues/2033]..





[jira] [Comment Edited] (FLINK-21133) FLIP-27 Source does not work with synchronous savepoint

2021-02-25 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17288467#comment-17288467
 ] 

Till Rohrmann edited comment on FLINK-21133 at 2/25/21, 8:45 AM:
-

+1 for those use cases/semantics summarised by [~trohrmann]. I agree that 3. 
and 4. are also effectively the same. Maybe trying to conclude various loose 
threads that we had here. I see the following, mostly independent, issues:

a) Two-phase commit support for 3. and 4. This will be dealt with by FLIP-147 
(please check the discussion on the dev mailing list).
b) Unfortunately, in FLINK-21132 we broke 3. (*stop-with-savepoint --drain*). In 
this case, `endOfInput()` should be called (CC [~roman_khachatryan]). 
Otherwise, some operators do not flush/drain their buffered state (for 
example {{AsyncWaitOperator}}, which does so only in the 
{{endOfInput()}} call). Note that before FLINK-21132, 3. was working correctly 
only if we ignore the issue of committing side effects (two-phase commit 
support). FLINK-21453 will fix this problem.
c) Changing 2. from "stop with savepoint" to "cancel with savepoint". 
Previously I thought of it as a refactoring/clean-up AND an optimisation 
(speeding up the shutdown). However, as we cannot use this approach for 3., I 
think it is just an optimisation that would diverge the code base. For this 
reason I think it would be better to postpone such an optimisation until after 
FLIP-147 is done (if ever).
d) FLIP-27 not supporting stop with savepoint (both 3. and 4.)


was (Author: pnowojski):
+1 for those use cases/semantics summarised by [~trohrmann]. I agree that 3. 
and 4. are also effectively the same. Maybe trying to conclude various loose 
threads that we had here. I see the following, mostly independent, issues:

a) Two phase commit support for 3. and 4. This will be dealt by FLIP-147 
(please check the discussion on the dev mailing list)
b) Unfortunately in FLINK-21132 we broke 3. (*stop-with-savepoint --drain*). In 
this case, `endOfInput()` should be called (CC [~roman_khachatryan]). 
Otherwise, some operators are not flushing/draining the buffered state (like 
for example {{AsyncWaitOperator}}, which is doing it only in the 
{{endOfInput()}} call). Note that before FLINK-21132, 3. was working correctly 
only if we ignore the issue of committing side effects (two phase commit 
support).
c) Changing 2., from "stop with savepoint" to "cancel with savepoint". 
Previously I thought about it as a refactor/clean up AND optimisation (speed up 
of the shutdown). However, as we can not used this approach for 3., I think 
it's just an optimisation that would diverge the code base. For this reason I 
think it would be better to postpone such optimisation after FLIP-147 is done 
(if ever).
d) FLIP-27 not supporting stop with savepoint (both 3. and 4.)

> FLIP-27 Source does not work with synchronous savepoint
> ---
>
> Key: FLINK-21133
> URL: https://issues.apache.org/jira/browse/FLINK-21133
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core, API / DataStream, Runtime / Checkpointing
>Affects Versions: 1.11.3, 1.12.1
>Reporter: Kezhu Wang
>Priority: Critical
> Fix For: 1.11.4, 1.13.0, 1.12.3
>
>
> I have pushed branch 
> [synchronous-savepoint-conflict-with-bounded-end-input-case|https://github.com/kezhuw/flink/commits/synchronous-savepoint-conflict-with-bounded-end-input-case]
>  in my repository. {{SavepointITCase.testStopSavepointWithFlip27Source}} 
> failed due to timeout.
> See also FLINK-21132 and 
> [apache/iceberg#2033|https://github.com/apache/iceberg/issues/2033]..





[GitHub] [flink] flinkbot edited a comment on pull request #14740: [FLINK-21067][runtime][checkpoint] Modify the logic of computing which tasks to trigger/ack/commit to support finished tasks

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #14740:
URL: https://github.com/apache/flink/pull/14740#issuecomment-766340750


   
   ## CI report:
   
   * c7e6b28b249f85cf52740d5201a769e0982a60aa UNKNOWN
   * bebd298009b12a9d5ac6518902f5534f8e00ff32 UNKNOWN
   * eb6c10b0d339bfc92a540314e7c58cbf11a70dd9 UNKNOWN
   * d2b929d9a6f8f9ce142d94ef8be40d8e70e289a1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13735)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14981: [FLINK-21343] Update documentation regarding unified savepoint format

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #14981:
URL: https://github.com/apache/flink/pull/14981#issuecomment-783266867


   
   ## CI report:
   
   * effa665a8f5549c858354f093a85fdfb1b439a1e Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13576)
 
   * 1774e0ac0afb93f529b6831b74f773312685ca6c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #15006: [FLINK-21485][sql-client] Simplify the ExecutionContext

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #15006:
URL: https://github.com/apache/flink/pull/15006#issuecomment-785114095


   
   ## CI report:
   
   * 4cd5563d44a62aec2b8db80cfe8da650fddf4ffa UNKNOWN
   * 4cc39b300203dd122980e6ef22080746834fcc9b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13736)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #15015: [FLINK-21479][coordination] Provide read-only interface of TaskManagerTracker to ResourceAllocationStrategy

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #15015:
URL: https://github.com/apache/flink/pull/15015#issuecomment-785556721


   
   ## CI report:
   
   * 122ce5ed1d4f393686491f5be80dacb320020e3e Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13737)
 
   * 69b32e4892cee2e19cd2be8a50f9a791828ec9e7 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13741)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #15018: [FLINK-21460][table api] Use Configuration to create TableEnvironment

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #15018:
URL: https://github.com/apache/flink/pull/15018#issuecomment-785717193


   
   ## CI report:
   
   * a44e1a4752e0b561b37b6403073e161157975a0e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13750)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #15013: [FLINK-21490][datastream] Make job graph generation deterministic in respect to hashes of input nodes.

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #15013:
URL: https://github.com/apache/flink/pull/15013#issuecomment-785429756


   
   ## CI report:
   
   * f2c1726aadbac68116f40e49698b6fa2457fd4e4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13726)
 
   * e8d9945955c0730486f3d64da4001b03bfe3be66 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13748)
 
   * e618698ebd320e7c1830b1b2a4c3aa0854ab5112 UNKNOWN
   * 8f3096c94d1de28f9c88803f17455814de3481ba UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21485) Simplify the ExecutionContext

2021-02-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21485:
---
Labels: pull-request-available  (was: )

> Simplify the ExecutionContext
> -
>
> Key: FLINK-21485
> URL: https://issues.apache.org/jira/browse/FLINK-21485
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Client
>Affects Versions: 1.13.0
>Reporter: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> Currently {{ExecutionContext}} is too complicated. In our design, we should 
> treat {{ExecutionContext}} as a wrapper of {{TableEnvironment}} and leave 
> the session config to {{SessionContext}}. However, we still need to store 
> the original {{Configuration}} and command line options. Therefore, we 
> introduce a new context named {{DefaultContext}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
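The layering described in FLINK-21485 above (ExecutionContext as a thin wrapper of TableEnvironment, session settings in SessionContext, and the original configuration plus command line options in a new DefaultContext) could be sketched roughly like this. This is an illustrative assumption, not the actual flink-sql-client code; only the class names mirror the ticket, the field shapes are invented:

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch only; the real classes live in flink-sql-client and
// have much richer APIs. Class names mirror the ticket, shapes are assumed.

/** Holds the original configuration and command line options. */
class DefaultContext {
    final Map<String, String> originalConfiguration;
    final List<String> commandLineOptions;

    DefaultContext(Map<String, String> configuration, List<String> options) {
        this.originalConfiguration = configuration;
        this.commandLineOptions = options;
    }
}

/** Per-session state; owns the session configuration. */
class SessionContext {
    final DefaultContext defaultContext;
    final Map<String, String> sessionConfig;

    SessionContext(DefaultContext defaultContext, Map<String, String> sessionConfig) {
        this.defaultContext = defaultContext;
        this.sessionConfig = sessionConfig;
    }
}

/** Reduced to a thin wrapper around the table environment. */
class ExecutionContext {
    final Object tableEnvironment; // stand-in for a TableEnvironment

    ExecutionContext(Object tableEnvironment) {
        this.tableEnvironment = tableEnvironment;
    }
}
```

The point of the split is that per-session mutable state no longer lives in the execution wrapper, so the wrapper can be rebuilt cheaply from the session config.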


[GitHub] [flink] zentol opened a new pull request #15019: [DRAFT][FLINK-21400] Store attempt numbers outside ExecutionGraph

2021-02-25 Thread GitBox


zentol opened a new pull request #15019:
URL: https://github.com/apache/flink/pull/15019


   Introduces a data structure to store the attempt numbers outside the 
ExecutionGraph. It's really just a simple Map which 
ties a specific vertex+subtask to an attempt count.
   
   Counts are set when an execution is registered at the EG, and retrieved when 
the ExecutionVertex creates a new Execution. The current attempt count is also 
still stored in the Execution, making the change less invasive (for example, 
resetForNewExecution continues to work without modifications).
   
   
   One thing to note: as is, the rescaling semantics are a bit funky.
   ScaleUp:
   If you begin with p=1 and an attempt count of 4, and then rescale to p=2, 
then what should the attempt count be for both subtasks?
   In this version the attempt count for subtask 1 would be retained, while 
subtask 2 starts at 0.
   Setting both to 0 would also make sense, but if we downscale again to p=1 
then it would be nice if the attempt count had some relation to the original 
count.
   Alternatively we could try to derive the attempt count for subtask 2 from 
other subtasks; in this example the obvious choice would be 2, because we're 
just replicating subtask 1.
   
   ScaleDown:
   The main issue arises when scaling down while the subtask with the largest 
index has the highest attempt count; that count would currently be lost. Say you 
have p=2, and subtask 2 has an attempt count of 4, and you scale down to p=1: 
the attempt count would then be determined solely by subtask 1, although in 
essence we just merged the two.
   
   
   Overall, I don't think resetting attempt counts to 0 is an option, because 
they can be used to gauge the health of a vertex, and we'd run into collisions 
within metrics if we ever re-use a subtask+attempt combination.
   
   The current approach is by far the simplest, and is the only option iff we 
want to adhere to these rules:
   * every combination of subtask + attempt count is only used once
   * the attempt counts for a given subtask over time always form a continuous 
series starting at 0
   
   But I'm quite interested in what other people think about this.
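   The mapping sketched above (a per-vertex, per-subtask attempt counter kept 
outside the ExecutionGraph) might look roughly like the following. This is a 
hedged reconstruction, not the PR's actual code; the class and method names 
(`AttemptCounts`, `registerAttempt`, `nextAttemptNumber`) are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a store of attempt numbers per (vertexId, subtaskIndex)
// that lives outside the ExecutionGraph and therefore survives its re-creation.
class AttemptCounts {
    // vertexId -> (subtaskIndex -> last used attempt number)
    private final Map<String, Map<Integer, Integer>> counts = new HashMap<>();

    /** Called when an execution is registered at the ExecutionGraph. */
    void registerAttempt(String vertexId, int subtask, int attempt) {
        counts.computeIfAbsent(vertexId, k -> new HashMap<>()).put(subtask, attempt);
    }

    /** Attempt number to use for the next Execution of this subtask. */
    int nextAttemptNumber(String vertexId, int subtask) {
        Map<Integer, Integer> perSubtask = counts.get(vertexId);
        if (perSubtask == null || !perSubtask.containsKey(subtask)) {
            return 0; // unseen subtask (e.g. after scale-up) starts at 0
        }
        return perSubtask.get(subtask) + 1;
    }
}
```

   Under these (assumed) semantics, restarts increment the count per subtask, 
scaling up keeps the counts of existing subtasks while new ones start at 0, and 
scaling down simply stops consulting the removed subtasks' entries, which is 
exactly the lossy ScaleDown case above.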



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21400) Attempt numbers are not maintained across restarts

2021-02-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21400:
---
Labels: pull-request-available  (was: )

> Attempt numbers are not maintained across restarts
> --
>
> Key: FLINK-21400
> URL: https://issues.apache.org/jira/browse/FLINK-21400
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> The DeclarativeScheduler discards the ExecutionGraph on each restart attempt, 
> as a result of which the attempt number remains 0.
> Various tests use the attempt number to determine whether an exception should 
> be thrown, and thus continue to throw exceptions on each restart.
> Affected tests:
> UnalignedCheckpointTestBase
> UnalignedCheckpointITCase
> ProcessingTimeWindowCheckpointingITCase
> LocalRecoveryITCase
> EventTimeWindowCheckpointingITCase
> EventTimeAllWindowCheckpointingITCase
> FileSinkITBase#testFileSink



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
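The failure pattern the ticket above describes can be illustrated with a minimal
sketch. In the real tests the attempt number comes from Flink's
`RuntimeContext#getAttemptNumber()`; the `FailOnFirstAttempt` helper here is
purely hypothetical:

```java
// Minimal sketch of the failure-injection pattern the affected tests rely on:
// throw on the first attempt, succeed on retries. If the scheduler reports
// attempt 0 after every restart (the bug described above), the exception is
// thrown on every attempt and the test can never finish.
class FailOnFirstAttempt {
    static String process(int attemptNumber, String value) {
        if (attemptNumber == 0) {
            throw new RuntimeException("injected failure on first attempt");
        }
        return value;
    }
}
```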


[GitHub] [flink] flinkbot commented on pull request #15019: [DRAFT][FLINK-21400] Store attempt numbers outside ExecutionGraph

2021-02-25 Thread GitBox


flinkbot commented on pull request #15019:
URL: https://github.com/apache/flink/pull/15019#issuecomment-785730574


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 021332c5552103f2389a9fedefe7b327eae163bc (Thu Feb 25 
08:53:57 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21359) CompatibilityResult issues with Flink 1.9.0

2021-02-25 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-21359:

Component/s: (was: Command Line Client)

> CompatibilityResult issues with Flink 1.9.0
> ---
>
> Key: FLINK-21359
> URL: https://issues.apache.org/jira/browse/FLINK-21359
> Project: Flink
>  Issue Type: Bug
> Environment: DEV
>Reporter: Siva
>Priority: Major
>
> I am using emr 5.28.0 and flink 1.9.0
>  
> Source code is working fine with emr 5.11.0 and flink 1.3.2, but the same 
> source code is throwing the following stack track with emr 5.28.0 and flink 
> 1.9.0
>  
> java.lang.NoClassDefFoundError: 
> org/apache/flink/api/common/typeutils/CompatibilityResult
>  at java.lang.Class.getDeclaredMethods0(Native Method)
>  at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
>  at java.lang.Class.getDeclaredMethod(Class.java:2128)
>  at java.io.ObjectStreamClass.getPrivateMethod(ObjectStreamClass.java:1643)
>  at java.io.ObjectStreamClass.access$1700(ObjectStreamClass.java:79)
>  at java.io.ObjectStreamClass$3.run(ObjectStreamClass.java:520)
>  at java.io.ObjectStreamClass$3.run(ObjectStreamClass.java:494)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:494)
>  at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:391)
>  at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1134)
>  at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1378)
>  at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
>  at 
> java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
>  at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
>  at 
> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
>  at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
>  at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
>  at 
> org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:586)
>  at 
> org.apache.flink.util.InstantiationUtil.writeObjectToConfig(InstantiationUtil.java:515)
>  at 
> org.apache.flink.streaming.api.graph.StreamConfig.setTypeSerializer(StreamConfig.java:193)
>  at 
> org.apache.flink.streaming.api.graph.StreamConfig.setTypeSerializerIn1(StreamConfig.java:143)
>  at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.setVertexConfig(StreamingJobGraphGenerator.java:438)
>  at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createChain(StreamingJobGraphGenerator.java:272)
>  at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createChain(StreamingJobGraphGenerator.java:238)
>  at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createChain(StreamingJobGraphGenerator.java:238)
>  at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createChain(StreamingJobGraphGenerator.java:243)
>  at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.setChaining(StreamingJobGraphGenerator.java:207)
>  at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:159)
>  at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:94)
>  at 
> org.apache.flink.streaming.api.graph.StreamGraph.getJobGraph(StreamGraph.java:737)
>  at 
> org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:88)
>  at 
> org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:122)
>  at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:227)
>  at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
>  at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
>  at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
>  at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>  at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.api.common.typeutils.CompatibilityResult
>  at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
>  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
>  ... 42 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

[GitHub] [flink] gaoyunhaii commented on pull request #14740: [FLINK-21067][runtime][checkpoint] Modify the logic of computing which tasks to trigger/ack/commit to support finished tasks

2021-02-25 Thread GitBox


gaoyunhaii commented on pull request #14740:
URL: https://github.com/apache/flink/pull/14740#issuecomment-785731866


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dawidwys closed pull request #14981: [FLINK-21343] Update documentation regarding unified savepoint format

2021-02-25 Thread GitBox


dawidwys closed pull request #14981:
URL: https://github.com/apache/flink/pull/14981


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dawidwys commented on pull request #14981: [FLINK-21343] Update documentation regarding unified savepoint format

2021-02-25 Thread GitBox


dawidwys commented on pull request #14981:
URL: https://github.com/apache/flink/pull/14981#issuecomment-785732676


   Thanks for the review @sjwiesman !



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-21359) CompatibilityResult issues with Flink 1.9.0

2021-02-25 Thread Tzu-Li (Gordon) Tai (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290789#comment-17290789
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-21359:
-

[~sivaj2ee] please see my questions above.

Again, I need to clarify: when you say upgrade, do you mean restoring from an 
old savepoint? If yes, which version was that savepoint taken with?

From the stack trace, this doesn't look like a restore issue, but rather like a 
version mismatch between the version used for compiling the job and the version 
run in AWS EMR.

> CompatibilityResult issues with Flink 1.9.0
> ---
>
> Key: FLINK-21359
> URL: https://issues.apache.org/jira/browse/FLINK-21359
> Project: Flink
>  Issue Type: Bug
> Environment: DEV
>Reporter: Siva
>Priority: Major
>
> I am using emr 5.28.0 and flink 1.9.0
>  
> Source code is working fine with emr 5.11.0 and flink 1.3.2, but the same 
> source code is throwing the following stack track with emr 5.28.0 and flink 
> 1.9.0
>  
> java.lang.NoClassDefFoundError: 
> org/apache/flink/api/common/typeutils/CompatibilityResult
>  at java.lang.Class.getDeclaredMethods0(Native Method)
>  at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
>  at java.lang.Class.getDeclaredMethod(Class.java:2128)
>  at java.io.ObjectStreamClass.getPrivateMethod(ObjectStreamClass.java:1643)
>  at java.io.ObjectStreamClass.access$1700(ObjectStreamClass.java:79)
>  at java.io.ObjectStreamClass$3.run(ObjectStreamClass.java:520)
>  at java.io.ObjectStreamClass$3.run(ObjectStreamClass.java:494)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:494)
>  at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:391)
>  at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1134)
>  at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1378)
>  at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
>  at 
> java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
>  at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
>  at 
> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
>  at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
>  at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
>  at 
> org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:586)
>  at 
> org.apache.flink.util.InstantiationUtil.writeObjectToConfig(InstantiationUtil.java:515)
>  at 
> org.apache.flink.streaming.api.graph.StreamConfig.setTypeSerializer(StreamConfig.java:193)
>  at 
> org.apache.flink.streaming.api.graph.StreamConfig.setTypeSerializerIn1(StreamConfig.java:143)
>  at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.setVertexConfig(StreamingJobGraphGenerator.java:438)
>  at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createChain(StreamingJobGraphGenerator.java:272)
>  at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createChain(StreamingJobGraphGenerator.java:238)
>  at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createChain(StreamingJobGraphGenerator.java:238)
>  at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createChain(StreamingJobGraphGenerator.java:243)
>  at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.setChaining(StreamingJobGraphGenerator.java:207)
>  at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:159)
>  at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:94)
>  at 
> org.apache.flink.streaming.api.graph.StreamGraph.getJobGraph(StreamGraph.java:737)
>  at 
> org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:88)
>  at 
> org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:122)
>  at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:227)
>  at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
>  at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
>  at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
>  at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>  at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)

[jira] [Closed] (FLINK-21343) Update documentation with possible migration strategies

2021-02-25 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-21343.

Resolution: Fixed

Implemented in a3450be32aace36cdacea5ec53b61b76cb2a97e5

> Update documentation with possible migration strategies
> ---
>
> Key: FLINK-21343
> URL: https://issues.apache.org/jira/browse/FLINK-21343
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Runtime / State Backends
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21501) Sync Chinese documentation with FLINK-21343

2021-02-25 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-21501:


 Summary: Sync Chinese documentation with FLINK-21343
 Key: FLINK-21501
 URL: https://issues.apache.org/jira/browse/FLINK-21501
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.13.0
Reporter: Dawid Wysakowicz


We should update the Chinese documentation with changes introduced in 
FLINK-21343



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14740: [FLINK-21067][runtime][checkpoint] Modify the logic of computing which tasks to trigger/ack/commit to support finished tasks

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #14740:
URL: https://github.com/apache/flink/pull/14740#issuecomment-766340750


   
   ## CI report:
   
   * c7e6b28b249f85cf52740d5201a769e0982a60aa UNKNOWN
   * bebd298009b12a9d5ac6518902f5534f8e00ff32 UNKNOWN
   * eb6c10b0d339bfc92a540314e7c58cbf11a70dd9 UNKNOWN
   * d2b929d9a6f8f9ce142d94ef8be40d8e70e289a1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13735)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13754)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15013: [FLINK-21490][datastream] Make job graph generation deterministic in respect to hashes of input nodes.

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #15013:
URL: https://github.com/apache/flink/pull/15013#issuecomment-785429756


   
   ## CI report:
   
   * f2c1726aadbac68116f40e49698b6fa2457fd4e4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13726)
 
   * e8d9945955c0730486f3d64da4001b03bfe3be66 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13748)
 
   * e618698ebd320e7c1830b1b2a4c3aa0854ab5112 UNKNOWN
   * 8f3096c94d1de28f9c88803f17455814de3481ba UNKNOWN
   * c96379cbcd18522dd643c5ef0a2150372ce50bb0 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] sunzheng1002 commented on a change in pull request #14725: [FLINK-20977] Fix use the "USE DATABASE" command bug

2021-02-25 Thread GitBox


sunzheng1002 commented on a change in pull request #14725:
URL: https://github.com/apache/flink/pull/14725#discussion_r582662310



##
File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/SqlCommandParser.java
##
@@ -256,7 +256,7 @@ private static SqlCommandCall parseBySqlParser(Parser 
sqlParser, String stmt) {
 
 USE_CATALOG,
 
-USE,
+USE("USE\\s+(.*)", SINGLE_OPERAND),

Review comment:
   I have revised it according to your suggestion.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15014: [FLINK-21490][datastream] Make job graph generation deterministic in respect to hashes of input nodes. [1.12]

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #15014:
URL: https://github.com/apache/flink/pull/15014#issuecomment-785429830


   
   ## CI report:
   
   * 076ef8d7f17282ee1be79f57612a4e4c70a472f9 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13727)
 
   * 4c79ec889a7f5eb36773adf4e8c61f6108acaa50 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13749)
 
   * d803b470a798c260b5336c315e6db325a2d1b8fe UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #15019: [DRAFT][FLINK-21400] Store attempt numbers outside ExecutionGraph

2021-02-25 Thread GitBox


flinkbot commented on pull request #15019:
URL: https://github.com/apache/flink/pull/15019#issuecomment-785740829


   
   ## CI report:
   
   * 021332c5552103f2389a9fedefe7b327eae163bc UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-21497) FileSourceTextLinesITCase.testBoundedTextFileSourceWithJobManagerFailover fail

2021-02-25 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-21497:


Assignee: Chesnay Schepler

> FileSourceTextLinesITCase.testBoundedTextFileSourceWithJobManagerFailover fail
> --
>
> Key: FLINK-21497
> URL: https://issues.apache.org/jira/browse/FLINK-21497
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: test-stability
> Fix For: 1.13.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13722&view=logs&j=a5ef94ef-68c2-57fd-3794-dc108ed1c495&t=9c1ddabe-d186-5a2c-5fcc-f3cafb3ec699
> {code:java}
> 2021-02-24T22:47:55.4844360Z java.lang.RuntimeException: Failed to fetch next 
> result
> 2021-02-24T22:47:55.4847421Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
> 2021-02-24T22:47:55.4848395Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> 2021-02-24T22:47:55.4849262Z  at 
> org.apache.flink.connector.file.src.FileSourceTextLinesITCase.testBoundedTextFileSource(FileSourceTextLinesITCase.java:148)
> 2021-02-24T22:47:55.4850030Z  at 
> org.apache.flink.connector.file.src.FileSourceTextLinesITCase.testBoundedTextFileSourceWithJobManagerFailover(FileSourceTextLinesITCase.java:108)
> 2021-02-24T22:47:55.4850780Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-02-24T22:47:55.4851322Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-02-24T22:47:55.4858977Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-02-24T22:47:55.4860737Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-02-24T22:47:55.4861855Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2021-02-24T22:47:55.4862873Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-02-24T22:47:55.4863598Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2021-02-24T22:47:55.4864289Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-02-24T22:47:55.4864937Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2021-02-24T22:47:55.4865570Z  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2021-02-24T22:47:55.4866152Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2021-02-24T22:47:55.4866670Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2021-02-24T22:47:55.4867172Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2021-02-24T22:47:55.4867765Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2021-02-24T22:47:55.4868588Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2021-02-24T22:47:55.4869683Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2021-02-24T22:47:55.4886595Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2021-02-24T22:47:55.4887656Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2021-02-24T22:47:55.4888451Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2021-02-24T22:47:55.4889199Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2021-02-24T22:47:55.4889845Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2021-02-24T22:47:55.4890447Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2021-02-24T22:47:55.4891037Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2021-02-24T22:47:55.4891604Z  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2021-02-24T22:47:55.4892235Z  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2021-02-24T22:47:55.4892959Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2021-02-24T22:47:55.4893573Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2021-02-24T22:47:55.4894216Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2021-02-24T22:47:55.4894824Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2021-02-24T22:47:55.4895425Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2021-02-24T22:47:55.4896027Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>

[GitHub] [flink] wuchong commented on a change in pull request #15003: [FLINK-21482][table-planner-blink] Support grouping set syntax for WindowAggregate

2021-02-25 Thread GitBox


wuchong commented on a change in pull request #15003:
URL: https://github.com/apache/flink/pull/15003#discussion_r582661033



##
File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/stream/sql/agg/WindowAggregateTest.scala
##
@@ -683,4 +682,165 @@ class WindowAggregateTest extends TableTestBase {
   "must be an integral multiple of step, but got maxSize 360 ms and 
step 150 ms")
 util.verifyExplain(sql)
   }
+
+  @Test
+  def testCantMergeWindowTVF_GroupingSetsWithoutWindowStartEnd(): Unit = {
+val sql =
+  """
+|SELECT
+|   a,
+|   count(distinct c) AS uv
+|FROM TABLE(TUMBLE(TABLE MyTable, DESCRIPTOR(rowtime), INTERVAL '15' 
MINUTE))
+|GROUP BY GROUPING SETS ((a), (window_start), (window_end))
+  """.stripMargin
+util.verifyRelPlan(sql)
+  }
+
+  @Test
+  def testCantMergeWindowTVF_GroupingSetsOnlyWithWindowStart(): Unit = {

Review comment:
   ditto.

##
File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/stream/sql/agg/WindowAggregateTest.scala
##
@@ -683,4 +682,165 @@ class WindowAggregateTest extends TableTestBase {
   "must be an integral multiple of step, but got maxSize 360 ms and 
step 150 ms")
 util.verifyExplain(sql)
   }
+
+  @Test

Review comment:
   Could you add some tests for other grouping sets syntax, including `CUBE` and `ROLLUP`.

##
File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/stream/sql/agg/WindowAggregateTest.scala
##
@@ -683,4 +682,165 @@ class WindowAggregateTest extends TableTestBase {
   "must be an integral multiple of step, but got maxSize 360 ms and 
step 150 ms")
 util.verifyExplain(sql)
   }
+
+  @Test
+  def testCantMergeWindowTVF_GroupingSetsWithoutWindowStartEnd(): Unit = {
+val sql =
+  """
+|SELECT
+|   a,
+|   count(distinct c) AS uv
+|FROM TABLE(TUMBLE(TABLE MyTable, DESCRIPTOR(rowtime), INTERVAL '15' 
MINUTE))
+|GROUP BY GROUPING SETS ((a), (window_start), (window_end))
+  """.stripMargin
+util.verifyRelPlan(sql)
+  }
+
+  @Test
+  def testCantMergeWindowTVF_GroupingSetsOnlyWithWindowStart(): Unit = {
+val sql =
+  """
+|SELECT
+|   a,
+|   count(distinct c) AS uv
+|FROM TABLE(TUMBLE(TABLE MyTable, DESCRIPTOR(rowtime), INTERVAL '15' 
MINUTE))
+|GROUP BY GROUPING SETS ((a, window_start), (window_start))
+  """.stripMargin
+util.verifyRelPlan(sql)
+  }
+
+  @Test
+  def testTumble_GroupingSets(): Unit = {
+val sql =
+  """
+|SELECT
+|   a,
+|   b,
+|   count(distinct c) AS uv
+|FROM TABLE(TUMBLE(TABLE MyTable, DESCRIPTOR(rowtime), INTERVAL '15' 
MINUTE))
+|GROUP BY GROUPING SETS ((a, window_start, window_end), (b, 
window_start, window_end))
+  """.stripMargin
+util.verifyRelPlan(sql)
+  }
+
+  @Test
+  def testTumble_GroupingSets1(): Unit = {
+val sql =
+  """
+|SELECT
+|   a,
+|   b,
+|   count(distinct c) AS uv
+|FROM TABLE(TUMBLE(TABLE MyTable, DESCRIPTOR(rowtime), INTERVAL '15' 
MINUTE))
+|GROUP BY GROUPING SETS ((a), (b)), window_start, window_end
+  """.stripMargin
+util.verifyRelPlan(sql)
+  }
+
+  @Test
+  def testTumble_GroupingSetsDistinctSplitEnabled(): Unit = {
+util.tableEnv.getConfig.getConfiguration.setBoolean(
+  OptimizerConfigOptions.TABLE_OPTIMIZER_DISTINCT_AGG_SPLIT_ENABLED, true)
+val sql =
+  """
+|SELECT
+|   a,
+|   b,
+|   count(*),
+|   sum(d),
+|   max(d) filter (where b > 1000),
+|   count(distinct c) AS uv
+|FROM TABLE(TUMBLE(TABLE MyTable, DESCRIPTOR(rowtime), INTERVAL '15' 
MINUTE))
+|GROUP BY GROUPING SETS ((a), (b)), window_start, window_end
+  """.stripMargin
+util.verifyRelPlan(sql)
+  }
+
+  @Test
+  def testTumble_GroupingSetsDistinctOnWindowColumns(): Unit = {

Review comment:
   Should rename to `testCantMergeWindowTVF_`

##
File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/stream/sql/agg/WindowAggregateTest.scala
##
@@ -683,4 +682,165 @@ class WindowAggregateTest extends TableTestBase {
   "must be an integral multiple of step, but got maxSize 360 ms and 
step 150 ms")
 util.verifyExplain(sql)
   }
+
+  @Test
+  def testCantMergeWindowTVF_GroupingSetsWithoutWindowStartEnd(): Unit = {

Review comment:
   Should be `testCantTranslateToWindowAgg_`, because the group keys don't contain both window_start and window_end, so the plan contains a `GroupAggregate` rather than a `WindowAggregate`.






[GitHub] [flink] wuchong commented on a change in pull request #15003: [FLINK-21482][table-planner-blink] Support grouping set syntax for WindowAggregate

2021-02-25 Thread GitBox


wuchong commented on a change in pull request #15003:
URL: https://github.com/apache/flink/pull/15003#discussion_r582675036



##
File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/stream/sql/agg/WindowAggregateTest.scala
##
@@ -683,4 +682,165 @@ class WindowAggregateTest extends TableTestBase {
   "must be an integral multiple of step, but got maxSize 360 ms and 
step 150 ms")
 util.verifyExplain(sql)
   }
+
+  @Test

Review comment:
   Could you add some tests for other grouping sets syntax, including 
`CUBE`, `ROLLUP`. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14740: [FLINK-21067][runtime][checkpoint] Modify the logic of computing which tasks to trigger/ack/commit to support finished tasks

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #14740:
URL: https://github.com/apache/flink/pull/14740#issuecomment-766340750


   
   ## CI report:
   
   * c7e6b28b249f85cf52740d5201a769e0982a60aa UNKNOWN
   * bebd298009b12a9d5ac6518902f5534f8e00ff32 UNKNOWN
   * eb6c10b0d339bfc92a540314e7c58cbf11a70dd9 UNKNOWN
   * d2b929d9a6f8f9ce142d94ef8be40d8e70e289a1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13754)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13735)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14725: [FLINK-20977] Fix use the "USE DATABASE" command bug

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #14725:
URL: https://github.com/apache/flink/pull/14725#issuecomment-765052802


   
   ## CI report:
   
   * c118f4d029f9ee0a195a985b1d808c437d8d753a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12358)
 
   * bbdffa5d674a286da0ea344633015b6a896cc303 UNKNOWN
   
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #14840: [FLINK-21231][sql-client] add "SHOW VIEWS" to SQL client

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #14840:
URL: https://github.com/apache/flink/pull/14840#issuecomment-772110005


   
   ## CI report:
   
   * 144583814392c82fd760bb6252508dba4f78cf50 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13738)
 
   
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #15014: [FLINK-21490][datastream] Make job graph generation deterministic in respect to hashes of input nodes. [1.12]

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #15014:
URL: https://github.com/apache/flink/pull/15014#issuecomment-785429830


   
   ## CI report:
   
   * 076ef8d7f17282ee1be79f57612a4e4c70a472f9 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13727)
 
   * 4c79ec889a7f5eb36773adf4e8c61f6108acaa50 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13749)
 
   * d803b470a798c260b5336c315e6db325a2d1b8fe Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13755)
 
   
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #15019: [DRAFT][FLINK-21400] Store attempt numbers outside ExecutionGraph

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #15019:
URL: https://github.com/apache/flink/pull/15019#issuecomment-785740829


   
   ## CI report:
   
   * 021332c5552103f2389a9fedefe7b327eae163bc Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13756)
 
   
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #15013: [FLINK-21490][datastream] Make job graph generation deterministic in respect to hashes of input nodes.

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #15013:
URL: https://github.com/apache/flink/pull/15013#issuecomment-785429756


   
   ## CI report:
   
   * f2c1726aadbac68116f40e49698b6fa2457fd4e4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13726)
 
   * e8d9945955c0730486f3d64da4001b03bfe3be66 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13748)
 
   * e618698ebd320e7c1830b1b2a4c3aa0854ab5112 UNKNOWN
   * 8f3096c94d1de28f9c88803f17455814de3481ba UNKNOWN
   * c96379cbcd18522dd643c5ef0a2150372ce50bb0 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13752)
 
   
   
   







[jira] [Created] (FLINK-21502) Reduce frequency of global re-allocate resources

2021-02-25 Thread Xintong Song (Jira)
Xintong Song created FLINK-21502:


 Summary: Reduce frequency of global re-allocate resources
 Key: FLINK-21502
 URL: https://issues.apache.org/jira/browse/FLINK-21502
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Checkpointing
Reporter: Xintong Song






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] rkhachatryan commented on a change in pull request #15013: [FLINK-21490][datastream] Make job graph generation deterministic in respect to hashes of input nodes.

2021-02-25 Thread GitBox


rkhachatryan commented on a change in pull request #15013:
URL: https://github.com/apache/flink/pull/15013#discussion_r582670718



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/graph/StreamingJobGraphGenerator.java
##
@@ -359,6 +366,16 @@ private void setChaining(Map<Integer, byte[]> hashes, List<Map<Integer, byte[]>>
 }
 }
 
+private static int compareHashes(byte[] hash1, byte[] hash2) {
+for (int index = 0; index < hash1.length; index++) {
+int diff = hash2[index] - hash1[index];

Review comment:
   nit: handle arrays of different length
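   For illustration, a length-aware variant of the comparison could look like the sketch below. The class name `HashOrder` is hypothetical and the PR may resolve the nit differently; this only shows one way to avoid reading past the end of the shorter array:

```java
/**
 * Sketch of a lexicographic byte[] comparison that tolerates arrays of
 * different lengths. Same orientation as the hunk above (hash2 minus hash1);
 * class and method placement are illustrative, not the PR's final code.
 */
public class HashOrder {

    public static int compareHashes(byte[] hash1, byte[] hash2) {
        int commonLength = Math.min(hash1.length, hash2.length);
        for (int index = 0; index < commonLength; index++) {
            int diff = hash2[index] - hash1[index];
            if (diff != 0) {
                return diff;
            }
        }
        // If one hash is a prefix of the other, fall back to comparing lengths
        // instead of throwing ArrayIndexOutOfBoundsException.
        return hash2.length - hash1.length;
    }
}
```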

##
File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/StreamingJobGraphGeneratorTest.java
##
@@ -954,6 +954,41 @@ public void testYieldingOperatorProperlyChainedOnNewSources() {
 assertEquals(4, vertices.get(0).getOperatorIDs().size());
 }
 
+@Test
+public void testDeterministicUnionOrder() {
+StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment(1);
+
+JobGraph jobGraph = getUnionJobGraph(env);
+JobVertex jobSink = Iterables.getLast(jobGraph.getVerticesSortedTopologicallyFromSources());
+List<String> expectedSourceOrder =
+jobSink.getInputs().stream()
+.map(edge -> edge.getSource().getProducer().getName())
+.collect(Collectors.toList());
+
+for (int i = 0; i < 100; i++) {
+JobGraph jobGraph2 = getUnionJobGraph(env);
+JobVertex jobSink2 = Iterables.getLast(jobGraph2.getVerticesSortedTopologicallyFromSources());
+assertNotEquals("Different runs should yield different vertexes", jobSink, jobSink2);
+List<String> actualSourceOrder =
+jobSink2.getInputs().stream()
+.map(edge -> edge.getSource().getProducer().getName())
+.collect(Collectors.toList());
+assertEquals("Union inputs reordered", actualSourceOrder, expectedSourceOrder);

Review comment:
   nit: flip expected | actual
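   For context, the nit refers to JUnit 4's `assertEquals(String message, Object expected, Object actual)` parameter order: with the two values swapped, a failure message attributes the actual value to the expectation. A minimal framework-free sketch of how such a message is built (the `AssertOrder` class is hypothetical, only mimicking the JUnit-style format):

```java
/** Sketch of an assertEquals-style failure message; shows why argument order matters. */
public class AssertOrder {

    public static String failureMessage(String message, Object expected, Object actual) {
        // JUnit-style formatting: the first value is always reported as "expected",
        // so passing (actual, expected) produces a misleading message.
        return message + " expected:<" + expected + "> but was:<" + actual + ">";
    }
}
```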

##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/graph/StreamingJobGraphGenerator.java
##
@@ -347,7 +348,13 @@ private void setChaining(Map<Integer, byte[]> hashes, List<Map<Integer, byte[]>>
 final Map<Integer, OperatorChainInfo> chainEntryPoints =
 buildChainedInputsAndGetHeadInputs(hashes, legacyHashes);
 final Collection<OperatorChainInfo> initialEntryPoints =
-new ArrayList<>(chainEntryPoints.values());
+chainEntryPoints.values().stream()
+.sorted(
+Comparator.comparing(
+operatorChainInfo ->
+hashes.get(operatorChainInfo.getStartNodeId()),
+StreamingJobGraphGenerator::compareHashes))

Review comment:
   Is it actually possible to have multiple sources with the same 
startNodeId?
   If so, I think hash comparison is not covered by tests (please feel free to 
leave it as is if it would require too much effort).

##
File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/StreamingJobGraphGeneratorTest.java
##
@@ -954,6 +954,41 @@ public void testYieldingOperatorProperlyChainedOnNewSources() {
 assertEquals(4, vertices.get(0).getOperatorIDs().size());
 }
 
+@Test
+public void testDeterministicUnionOrder() {
+StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment(1);
+
+JobGraph jobGraph = getUnionJobGraph(env);
+JobVertex jobSink = Iterables.getLast(jobGraph.getVerticesSortedTopologicallyFromSources());
+List<String> expectedSourceOrder =
+jobSink.getInputs().stream()
+.map(edge -> edge.getSource().getProducer().getName())
+.collect(Collectors.toList());
+
+for (int i = 0; i < 100; i++) {
+JobGraph jobGraph2 = getUnionJobGraph(env);

Review comment:
   nit: add more than just two sources in `getUnionJobGraph` to increase 
failure probability?









[GitHub] [flink] wuchong commented on a change in pull request #14725: [FLINK-20977] Fix use the "USE DATABASE" command bug

2021-02-25 Thread GitBox


wuchong commented on a change in pull request #14725:
URL: https://github.com/apache/flink/pull/14725#discussion_r582689059



##
File path: 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/CliClientTest.java
##
@@ -258,6 +258,31 @@ public void testUseDatabaseAndShowCurrentDatabase() throws 
Exception {
 assertTrue(output.contains("db"));
 }
 
+@Test
+public void testUseDatabaseAndShowCurrentDatabaseByKeyword() throws Exception {
+TestingExecutor executor = new TestingExecutorBuilder()
+.setExecuteSqlConsumer((ignored1, sql) -> {
+if (sql.toLowerCase().equals("use `mod`")) {
+return TestTableResult.TABLE_RESULT_OK;
+} else if (sql.toLowerCase().equals("show current database")) {
+SHOW_ROW.setField(0, "`mod`");
+return new TestTableResult(ResultKind.SUCCESS_WITH_CONTENT,
+TableSchema.builder().field("current database name", DataTypes.STRING()).build(),
+CloseableIterator.ofElement(SHOW_ROW, ele -> {}));
+} else {
+throw new SqlExecutionException("unexpected database name: db");
+}
+})
.build();
+String output = testExecuteSql(executor, "use `mod`;");
+assertThat(executor.getNumExecuteSqlCalls(), is(1));
+assertFalse(output.contains("OK"));

Review comment:
   Should be `assertFalse(output.contains("unexpected database name"));`.
   
   This test can pass even if we don't fix `CliClient`.









[GitHub] [flink] streaming-olap removed a comment on pull request #8751: [FLINK-11937][StateBackend]Resolve small file problem in RocksDB incremental checkpoint

2021-02-25 Thread GitBox


streaming-olap removed a comment on pull request #8751:
URL: https://github.com/apache/flink/pull/8751#issuecomment-582924362


   > Do you deal with the storage amplification in this pr?
   
   Yes, I introduced this PR into production at my company, but there is a bug that causes a connection leak. If you introduce this PR into production, you should fix that bug first.







[GitHub] [flink] wuchong commented on pull request #14725: [FLINK-20977] Fix use the "USE DATABASE" command bug

2021-02-25 Thread GitBox


wuchong commented on pull request #14725:
URL: https://github.com/apache/flink/pull/14725#issuecomment-785767856


   Please do not use "git merge" to rebase branches; otherwise the changes are hard to track. Please use "git rebase" instead. IntelliJ IDEA provides an easy tool for rebasing; you can find it via `VCS -> Git -> Rebase`.







[GitHub] [flink] flinkbot edited a comment on pull request #14725: [FLINK-20977] Fix use the "USE DATABASE" command bug

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #14725:
URL: https://github.com/apache/flink/pull/14725#issuecomment-765052802


   
   ## CI report:
   
   * c118f4d029f9ee0a195a985b1d808c437d8d753a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12358)
 
   * bbdffa5d674a286da0ea344633015b6a896cc303 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13757)
 
   
   
   







[GitHub] [flink] SteNicholas opened a new pull request #15020: [FLINK-21445][kubernetes] Application mode does not set the configuration when building PackagedProgram

2021-02-25 Thread GitBox


SteNicholas opened a new pull request #15020:
URL: https://github.com/apache/flink/pull/15020


   ## What is the purpose of the change
   
   *Application mode uses `ClassPathPackagedProgramRetriever` to create the `PackagedProgram`, but it does not set the configuration. This causes some client configurations not to take effect. `ClassPathPackagedProgramRetriever` should be refactored into separate classes for the Python-based `PackagedProgram` and the `PackagedProgram` with and without a jar.*
   
   ## Brief change log
   
 - *`ClassPathPackagedProgramRetriever` is refactored into separate implementations, `PythonBasedPackagedProgramRetriever`, `JarFilePackagedProgramRetriever`, and `ScanClassPathPackagedProgramRetriever`, all of which set the configuration on the `PackagedProgram`.*
 - *`ApplicationClusterEntryPoint` obtains the `PackagedProgramRetriever` with the configuration set.*
   
   ## Verifying this change
   
 - *`ClassPathPackagedProgramRetrieverTest` adds the test case `testGetPackagedProgramWithConfiguration` to verify that the `PackagedProgram` is created with the specified configuration.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)







[jira] [Updated] (FLINK-21445) Application mode does not set the configuration when building PackagedProgram

2021-02-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21445:
---
Labels: pull-request-available  (was: )

> Application mode does not set the configuration when building PackagedProgram
> -
>
> Key: FLINK-21445
> URL: https://issues.apache.org/jira/browse/FLINK-21445
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Deployment / Scripts, 
> Deployment / YARN
>Affects Versions: 1.11.3, 1.12.1, 1.13.0
>Reporter: Yang Wang
>Assignee: Nicholas Jiang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.4, 1.13.0, 1.12.3
>
>
> Application mode uses {{ClassPathPackagedProgramRetriever}} to create the
> {{PackagedProgram}}. However, it does not set the configuration. This causes
> some client configurations, for example {{classloader.resolve-order}}, not to
> take effect.
> I think we simply forgot to do this, since we have done a similar thing in
> {{CliFrontend}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on a change in pull request #14996: [FLINK-21435][table] Implement Schema, ResolvedSchema, SchemaResolver

2021-02-25 Thread GitBox


wuchong commented on a change in pull request #14996:
URL: https://github.com/apache/flink/pull/14996#discussion_r582702862



##
File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/Schema.java
##
@@ -0,0 +1,856 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.api;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.table.catalog.Column;
+import org.apache.flink.table.catalog.Column.ComputedColumn;
+import org.apache.flink.table.catalog.Column.MetadataColumn;
+import org.apache.flink.table.catalog.Column.PhysicalColumn;
+import org.apache.flink.table.catalog.Constraint;
+import org.apache.flink.table.catalog.ResolvedSchema;
+import org.apache.flink.table.catalog.SchemaResolver;
+import org.apache.flink.table.catalog.UniqueConstraint;
+import org.apache.flink.table.catalog.WatermarkSpec;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.SqlCallExpression;
+import org.apache.flink.table.types.AbstractDataType;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+import org.apache.flink.table.types.logical.RowType;
+import org.apache.flink.table.utils.EncodingUtils;
+import org.apache.flink.util.Preconditions;
+import org.apache.flink.util.StringUtils;
+
+import javax.annotation.Nullable;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Objects;
+import java.util.Optional;
+import java.util.UUID;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
+
+import static 
org.apache.flink.table.types.logical.utils.LogicalTypeChecks.hasRoot;
+
+/**
+ * Schema of a table or view.
+ *
+ * A schema represents the schema part of a {@code CREATE TABLE (schema) 
WITH (options)} DDL
+ * statement in SQL. It defines columns of different kind, constraints, time 
attributes, and
+ * watermark strategies. It is possible to reference objects (such as 
functions or types) across
+ * different catalogs.
+ *
+ * This class is used in the API and catalogs to define an unresolved 
schema that will be
+ * translated to {@link ResolvedSchema}. Some methods of this class perform 
basic validation,
+ * however, the main validation happens during the resolution.
+ *
+ * Since an instance of this class is unresolved, it should not be directly 
persisted. The {@link
+ * #toString()} shows only a summary of the contained objects.
+ */
+@PublicEvolving
+public final class Schema {
+
+private final List<UnresolvedColumn> columns;
+
+private final List<UnresolvedWatermarkSpec> watermarkSpecs;
+
+private final @Nullable UnresolvedPrimaryKey primaryKey;
+
+private Schema(
+List<UnresolvedColumn> columns,
+List<UnresolvedWatermarkSpec> watermarkSpecs,
+@Nullable UnresolvedPrimaryKey primaryKey) {
+this.columns = columns;
+this.watermarkSpecs = watermarkSpecs;
+this.primaryKey = primaryKey;
+}
+
+/** Builder for configuring and creating instances of {@link Schema}. */
+public static Schema.Builder newBuilder() {
+return new Builder();
+}
+
+public List<UnresolvedColumn> getColumns() {
+return columns;
+}
+
+public List<UnresolvedWatermarkSpec> getWatermarkSpecs() {
+return watermarkSpecs;
+}
+
+public Optional<UnresolvedPrimaryKey> getPrimaryKey() {
+return Optional.ofNullable(primaryKey);
+}
+
+/** Resolves the given {@link Schema} to a validated {@link 
ResolvedSchema}. */
+public ResolvedSchema resolve(SchemaResolver resolver) {
+return resolver.resolve(this);
+}
+
+@Override
+public String toString() {
+final List<Object> components = new ArrayList<>();
+components.addAll(columns);
+components.addAll(watermarkSpecs);
+if (primaryKey != null) {
+components.add(primaryKey);
+}
+return components.stream()
+.map(Objects::toString)
+.map(s -> "  " + s)
+.collect(Collectors.joining(", \n", "(\n", "\n)"));
+}
+
+@Override
+public boolean equals(Object o) {
+if (this == o) {
+return true;
+

[GitHub] [flink] rkhachatryan commented on pull request #15007: [FLINK-21453][checkpointing] Do not ignore endOfInput when terminating a job with savepoint (1.12)

2021-02-25 Thread GitBox


rkhachatryan commented on pull request #15007:
URL: https://github.com/apache/flink/pull/15007#issuecomment-785790775


   Build failure is unrelated (FLINK-20329), merging.







[GitHub] [flink] rkhachatryan merged pull request #15012: fixup! [FLINK-21453][checkpointing][refactor] Replace advanceToEndOfTime with new CheckpointType.SAVEPOINT_TERMINATE

2021-02-25 Thread GitBox


rkhachatryan merged pull request #15012:
URL: https://github.com/apache/flink/pull/15012


   







[GitHub] [flink] rkhachatryan merged pull request #15011: [FLINK-21453][checkpointing] Do not ignore endOfInput when terminating a job with savepoint (1.11)

2021-02-25 Thread GitBox


rkhachatryan merged pull request #15011:
URL: https://github.com/apache/flink/pull/15011


   







[GitHub] [flink] wuchong commented on pull request #14958: [FLINK-18550][sql-client] use TableResult#collect to get select result for sql client

2021-02-25 Thread GitBox


wuchong commented on pull request #14958:
URL: https://github.com/apache/flink/pull/14958#issuecomment-785791462


   Build passed: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13740&view=results
   Merging...







[GitHub] [flink] flinkbot edited a comment on pull request #14958: [FLINK-18550][sql-client] use TableResult#collect to get select result for sql client

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #14958:
URL: https://github.com/apache/flink/pull/14958#issuecomment-781106758


   
   ## CI report:
   
   * 9e7bfb51a1084576e38ad700b626bb7e776eab2d Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13740)
 
   
   
   







[GitHub] [flink] rkhachatryan merged pull request #15007: [FLINK-21453][checkpointing] Do not ignore endOfInput when terminating a job with savepoint (1.12)

2021-02-25 Thread GitBox


rkhachatryan merged pull request #15007:
URL: https://github.com/apache/flink/pull/15007


   







[GitHub] [flink] wuchong merged pull request #14958: [FLINK-18550][sql-client] use TableResult#collect to get select result for sql client

2021-02-25 Thread GitBox


wuchong merged pull request #14958:
URL: https://github.com/apache/flink/pull/14958


   







[GitHub] [flink] flinkbot edited a comment on pull request #15001: [FLINK-20332][runtime][k8s] Add recovered workers to pending resources for native k8s deployment

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #15001:
URL: https://github.com/apache/flink/pull/15001#issuecomment-784869362


   
   ## CI report:
   
   * f67e1c0e2148164c4349b821dcf75d80fc76ba46 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13689)
 
   * 64f83133c58e0002f81d58d7aa62e4454034206b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Closed] (FLINK-18550) use TableResult#collect to get select result for sql client

2021-02-25 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-18550.
---
Resolution: Fixed

Fixed in master: 7fd4767df028ad75c7641ff63b26701bb199b8d5

> use TableResult#collect to get select result for sql client
> ---
>
> Key: FLINK-18550
> URL: https://issues.apache.org/jira/browse/FLINK-18550
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Client
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> Currently, sql client has a lot of logic to handle the select result in sql 
> client, which can be simplified through {{TableResult#collect}} method.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
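As a hedged illustration of the simplification described in FLINK-18550 above: instead of bespoke client-side result handling, the SQL client can drive result retrieval through `TableResult#collect`. The table name and the printing step below are hypothetical, not taken from the actual patch.

```java
// Sketch only: fetch select results via TableResult#collect instead of
// custom client-side result-handling logic. "Orders" is a hypothetical table.
TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.newInstance().build());
TableResult result = tEnv.executeSql("SELECT * FROM Orders");
try (CloseableIterator<Row> rows = result.collect()) {
    while (rows.hasNext()) {
        // render each row in the client; printing stands in for real rendering
        System.out.println(rows.next());
    }
}
```

Closing the iterator (here via try-with-resources) also cancels the underlying job for unbounded queries.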


[GitHub] [flink] flinkbot commented on pull request #15020: [FLINK-21445][kubernetes] Application mode does not set the configuration when building PackagedProgram

2021-02-25 Thread GitBox


flinkbot commented on pull request #15020:
URL: https://github.com/apache/flink/pull/15020#issuecomment-785792112


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 42d551868a43e56292a96ae0af25580c28e1dcc6 (Thu Feb 25 
10:31:59 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Resolved] (FLINK-21453) BoundedOneInput.endInput is NOT called when doing stop with savepoint WITH drain

2021-02-25 Thread Roman Khachatryan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Khachatryan resolved FLINK-21453.
---
Resolution: Fixed

> BoundedOneInput.endInput is NOT called when doing stop with savepoint WITH 
> drain
> 
>
> Key: FLINK-21453
> URL: https://issues.apache.org/jira/browse/FLINK-21453
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / Task
>Affects Versions: 1.11.4, 1.12.2, 1.13.0
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.4, 1.12.2, 1.13.0
>
>
> In FLINK-21132 we disable {{endInput}} calls when stopping with savepoint. 
> However as discussed in 
> [FLINK-21133|https://issues.apache.org/jira/browse/FLINK-21133?focusedCommentId=17288467&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17288467],
>  stop with savepoint with drain (*stop-with-savepoint --drain*), should be 
> calling {{endOfInput()}}.



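As a hedged sketch of the semantics FLINK-21453 above restores (the operator name and logic are hypothetical): an operator implementing `BoundedOneInput` relies on `endInput()` being called to flush buffered state when input ends, which should also happen on *stop-with-savepoint --drain*.

```java
// Illustrative operator (hypothetical): buffers a running sum and emits it
// from endInput(). With the fix, endInput() also fires on
// stop-with-savepoint --drain, so no buffered data is lost.
public class BufferingSumOperator extends AbstractStreamOperator<Long>
        implements OneInputStreamOperator<Long, Long>, BoundedOneInput {

    private long sum;

    @Override
    public void processElement(StreamRecord<Long> element) {
        sum += element.getValue();
    }

    @Override
    public void endInput() {
        // flush the buffered aggregate exactly once at end of input
        output.collect(new StreamRecord<>(sum));
    }
}
```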


[GitHub] [flink] alpinegizmo commented on a change in pull request #14932: [FLINK-21369][docs] Document Checkpoint Storage

2021-02-25 Thread GitBox


alpinegizmo commented on a change in pull request #14932:
URL: https://github.com/apache/flink/pull/14932#discussion_r582692404



##
File path: docs/content/docs/dev/datastream/fault-tolerance/checkpointing.md
##
@@ -179,18 +190,19 @@ Some more parameters and/or defaults may be set via 
`conf/flink-conf.yaml` (see
 
 {{< top >}}
 
-
-## Selecting a State Backend
+## Selecting Checkpoint Storage
 
 Flink's [checkpointing mechanism]({{< ref "docs/learn-flink/fault_tolerance" 
>}}) stores consistent snapshots
 of all the state in timers and stateful operators, including connectors, 
windows, and any [user-defined state](state.html).
 Where the checkpoints are stored (e.g., JobManager memory, file system, 
database) depends on the configured
-**State Backend**. 
+**Checkpoint Storage**. 
 
-By default, state is kept in memory in the TaskManagers and checkpoints are 
stored in memory in the JobManager. For proper persistence of large state,
-Flink supports various approaches for storing and checkpointing state in other 
state backends. The choice of state backend can be configured via 
`StreamExecutionEnvironment.setStateBackend(…)`.
+By default, checkpoints are stored in memory in the JobManager. For proper 
persistence of large state,
+Flink supports various approaches for checkpointing state in other locations. 
+The choice of checkpoint storag ecan be configured via 
`StreamExecutionEnvironment.getCheckpointConfig().setCheckpointStorage(…)`.
+It is highly encouraged that checkpoints are stored in a highly-available 
filesystem for most production deployments. 

Review comment:
   I don't see any reason to equivocate here.
   
   ```suggestion
   It is strongly encouraged that checkpoints be stored in a highly-available 
filesystem for production deployments. 
   ```

##
File path: docs/content/docs/dev/datastream/fault-tolerance/checkpointing.md
##
@@ -179,18 +190,19 @@ Some more parameters and/or defaults may be set via 
`conf/flink-conf.yaml` (see
 
 {{< top >}}
 
-
-## Selecting a State Backend
+## Selecting Checkpoint Storage
 
 Flink's [checkpointing mechanism]({{< ref "docs/learn-flink/fault_tolerance" 
>}}) stores consistent snapshots
 of all the state in timers and stateful operators, including connectors, 
windows, and any [user-defined state](state.html).
 Where the checkpoints are stored (e.g., JobManager memory, file system, 
database) depends on the configured
-**State Backend**. 
+**Checkpoint Storage**. 
 
-By default, state is kept in memory in the TaskManagers and checkpoints are 
stored in memory in the JobManager. For proper persistence of large state,
-Flink supports various approaches for storing and checkpointing state in other 
state backends. The choice of state backend can be configured via 
`StreamExecutionEnvironment.setStateBackend(…)`.
+By default, checkpoints are stored in memory in the JobManager. For proper 
persistence of large state,
+Flink supports various approaches for checkpointing state in other locations. 
+The choice of checkpoint storag ecan be configured via 
`StreamExecutionEnvironment.getCheckpointConfig().setCheckpointStorage(…)`.

Review comment:
   ```suggestion
   The choice of checkpoint storage can be configured via 
`StreamExecutionEnvironment.getCheckpointConfig().setCheckpointStorage(…)`.
   ```
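For illustration, the configuration call referenced in the suggestion above could look like the following sketch (the checkpoint interval and storage path are hypothetical):

```java
// Sketch: configure checkpoint storage via the CheckpointConfig.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(10_000L); // checkpoint every 10 seconds
// store checkpoints in a highly-available filesystem; the path is illustrative
env.getCheckpointConfig().setCheckpointStorage("s3://my-bucket/flink-checkpoints");
```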

##
File path: docs/content/docs/ops/state/checkpoints.md
##
@@ -35,6 +35,64 @@ the same semantics as a failure-free execution.
 See [Checkpointing]({{< ref 
"docs/dev/datastream/fault-tolerance/checkpointing" >}}) for how to enable and
 configure checkpoints for your program.
 
+## Checkpoint Storage
+
+When checkpointing is activated, such state is persisted upon checkpoints to 
guard against data loss and recover consistently.
+Where the state is persisted upon checkpoints depends on the chosen 
**Checkpoint Storage**.

Review comment:
   ```suggestion
   When checkpointing is enabled, managed state is persisted to ensure 
consistent recovery in case of failures.
   Where the state is persisted during checkpointing depends on the chosen 
**Checkpoint Storage**.
   ```

##
File path: docs/content/docs/ops/state/checkpoints.md
##
@@ -35,6 +35,64 @@ the same semantics as a failure-free execution.
 See [Checkpointing]({{< ref 
"docs/dev/datastream/fault-tolerance/checkpointing" >}}) for how to enable and
 configure checkpoints for your program.
 
+## Checkpoint Storage
+
+When checkpointing is activated, such state is persisted upon checkpoints to 
guard against data loss and recover consistently.
+Where the state is persisted upon checkpoints depends on the chosen 
**Checkpoint Storage**.
+
+## Available State Backends
+
+Out of the box, Flink bundles these checkpoint storage types:
+
+ - *JobManagerCheckpointStorage*
+ - *FileSystemCheckpointStorage*
+
+{{< hint info >}}
+If a checkpoint directory is configured `FileSystemCheckpointStorage` will be 
used, otherwise the system will use

[GitHub] [flink] yulei0824 commented on pull request #14840: [FLINK-21231][sql-client] add "SHOW VIEWS" to SQL client

2021-02-25 Thread GitBox


yulei0824 commented on pull request #14840:
URL: https://github.com/apache/flink/pull/14840#issuecomment-785799012


   > Hi @yulei0824 , please do not use "git merge" to rebase branches, 
otherwise it's hard to track the commits. It would be better to use "git 
rebase".
   
   I have resolved the conflicts.







[jira] [Updated] (FLINK-21497) FileSourceTextLinesITCase.testBoundedTextFileSourceWithJobManagerFailover fail

2021-02-25 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-21497:
-
Parent: (was: FLINK-21075)
Issue Type: Bug  (was: Sub-task)

> FileSourceTextLinesITCase.testBoundedTextFileSourceWithJobManagerFailover fail
> --
>
> Key: FLINK-21497
> URL: https://issues.apache.org/jira/browse/FLINK-21497
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: test-stability
> Fix For: 1.13.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13722&view=logs&j=a5ef94ef-68c2-57fd-3794-dc108ed1c495&t=9c1ddabe-d186-5a2c-5fcc-f3cafb3ec699
> {code:java}
> 2021-02-24T22:47:55.4844360Z java.lang.RuntimeException: Failed to fetch next 
> result
> 2021-02-24T22:47:55.4847421Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
> 2021-02-24T22:47:55.4848395Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> 2021-02-24T22:47:55.4849262Z  at 
> org.apache.flink.connector.file.src.FileSourceTextLinesITCase.testBoundedTextFileSource(FileSourceTextLinesITCase.java:148)
> 2021-02-24T22:47:55.4850030Z  at 
> org.apache.flink.connector.file.src.FileSourceTextLinesITCase.testBoundedTextFileSourceWithJobManagerFailover(FileSourceTextLinesITCase.java:108)
> 2021-02-24T22:47:55.4850780Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-02-24T22:47:55.4851322Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-02-24T22:47:55.4858977Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-02-24T22:47:55.4860737Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-02-24T22:47:55.4861855Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2021-02-24T22:47:55.4862873Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-02-24T22:47:55.4863598Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2021-02-24T22:47:55.4864289Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-02-24T22:47:55.4864937Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2021-02-24T22:47:55.4865570Z  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2021-02-24T22:47:55.4866152Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2021-02-24T22:47:55.4866670Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2021-02-24T22:47:55.4867172Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2021-02-24T22:47:55.4867765Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2021-02-24T22:47:55.4868588Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2021-02-24T22:47:55.4869683Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2021-02-24T22:47:55.4886595Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2021-02-24T22:47:55.4887656Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2021-02-24T22:47:55.4888451Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2021-02-24T22:47:55.4889199Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2021-02-24T22:47:55.4889845Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2021-02-24T22:47:55.4890447Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2021-02-24T22:47:55.4891037Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2021-02-24T22:47:55.4891604Z  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2021-02-24T22:47:55.4892235Z  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2021-02-24T22:47:55.4892959Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2021-02-24T22:47:55.4893573Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2021-02-24T22:47:55.4894216Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2021-02-24T22:47:55.4894824Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2021-02-24T22:47:55.4895425Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2021-02-24T22:47:55.4896027Z  at 
> org.junit.runners.ParentRu

[jira] [Commented] (FLINK-21497) FileSourceTextLinesITCase.testBoundedTextFileSourceWithJobManagerFailover fail

2021-02-25 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290840#comment-17290840
 ] 

Chesnay Schepler commented on FLINK-21497:
--

The underlying issue also occurs with the default scheduler.

> FileSourceTextLinesITCase.testBoundedTextFileSourceWithJobManagerFailover fail
> --
>
> Key: FLINK-21497
> URL: https://issues.apache.org/jira/browse/FLINK-21497
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: test-stability
> Fix For: 1.13.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13722&view=logs&j=a5ef94ef-68c2-57fd-3794-dc108ed1c495&t=9c1ddabe-d186-5a2c-5fcc-f3cafb3ec699
> {code:java}
> 2021-02-24T22:47:55.4844360Z java.lang.RuntimeException: Failed to fetch next 
> result
> 2021-02-24T22:47:55.4847421Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
> 2021-02-24T22:47:55.4848395Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> 2021-02-24T22:47:55.4849262Z  at 
> org.apache.flink.connector.file.src.FileSourceTextLinesITCase.testBoundedTextFileSource(FileSourceTextLinesITCase.java:148)
> 2021-02-24T22:47:55.4850030Z  at 
> org.apache.flink.connector.file.src.FileSourceTextLinesITCase.testBoundedTextFileSourceWithJobManagerFailover(FileSourceTextLinesITCase.java:108)
> 2021-02-24T22:47:55.4850780Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-02-24T22:47:55.4851322Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-02-24T22:47:55.4858977Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-02-24T22:47:55.4860737Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-02-24T22:47:55.4861855Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2021-02-24T22:47:55.4862873Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-02-24T22:47:55.4863598Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2021-02-24T22:47:55.4864289Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-02-24T22:47:55.4864937Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2021-02-24T22:47:55.4865570Z  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2021-02-24T22:47:55.4866152Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2021-02-24T22:47:55.4866670Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2021-02-24T22:47:55.4867172Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2021-02-24T22:47:55.4867765Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2021-02-24T22:47:55.4868588Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2021-02-24T22:47:55.4869683Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2021-02-24T22:47:55.4886595Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2021-02-24T22:47:55.4887656Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2021-02-24T22:47:55.4888451Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2021-02-24T22:47:55.4889199Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2021-02-24T22:47:55.4889845Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2021-02-24T22:47:55.4890447Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2021-02-24T22:47:55.4891037Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2021-02-24T22:47:55.4891604Z  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2021-02-24T22:47:55.4892235Z  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2021-02-24T22:47:55.4892959Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2021-02-24T22:47:55.4893573Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2021-02-24T22:47:55.4894216Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2021-02-24T22:47:55.4894824Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2021-02-24T22:47:55.4895425Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2021-02-24T22:47:55

[GitHub] [flink] flinkbot edited a comment on pull request #14725: [FLINK-20977] Fix use the "USE DATABASE" command bug

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #14725:
URL: https://github.com/apache/flink/pull/14725#issuecomment-765052802


   
   ## CI report:
   
   * bbdffa5d674a286da0ea344633015b6a896cc303 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13757)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14741: [FLINK-21021][python] Bump Beam to 2.27.0

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #14741:
URL: https://github.com/apache/flink/pull/14741#issuecomment-766340784


   
   ## CI report:
   
   * c349fa0f9ef5ae136a6dc7616fe0e8e93a2b6133 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13743)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14868: [FLINK-21326][runtime] Optimize building topology when initializing ExecutionGraph

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #14868:
URL: https://github.com/apache/flink/pull/14868#issuecomment-773192044


   
   ## CI report:
   
   * e19dc30e8891b756dbdf528f62ac8c77f3a18182 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13739)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14994: [FLINK-21452][connector/common] Stop snapshotting registered readers in source coordinator.

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #14994:
URL: https://github.com/apache/flink/pull/14994#issuecomment-784019705


   
   ## CI report:
   
   * 82266e5cbe7a60169f070e232ee10ee57a1d9bd5 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13747)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #15001: [FLINK-20332][runtime][k8s] Add recovered workers to pending resources for native k8s deployment

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #15001:
URL: https://github.com/apache/flink/pull/15001#issuecomment-784869362


   
   ## CI report:
   
   * f67e1c0e2148164c4349b821dcf75d80fc76ba46 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13689)
 
   * 64f83133c58e0002f81d58d7aa62e4454034206b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13758)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #15016: [FLINK-21413][state] Clean TtlMapState and TtlListState after all elements are expired

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #15016:
URL: https://github.com/apache/flink/pull/15016#issuecomment-785640058


   
   ## CI report:
   
   * 02bd50240636971f1b38f6ca0e2940200de2453a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13742)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #15013: [FLINK-21490][datastream] Make job graph generation deterministic in respect to hashes of input nodes.

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #15013:
URL: https://github.com/apache/flink/pull/15013#issuecomment-785429756


   
   ## CI report:
   
   * e8d9945955c0730486f3d64da4001b03bfe3be66 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13748)
 
   * e618698ebd320e7c1830b1b2a4c3aa0854ab5112 UNKNOWN
   * 8f3096c94d1de28f9c88803f17455814de3481ba UNKNOWN
   * c96379cbcd18522dd643c5ef0a2150372ce50bb0 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13752)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #15020: [FLINK-21445][kubernetes] Application mode does not set the configuration when building PackagedProgram

2021-02-25 Thread GitBox


flinkbot commented on pull request #15020:
URL: https://github.com/apache/flink/pull/15020#issuecomment-785804760


   
   ## CI report:
   
   * 42d551868a43e56292a96ae0af25580c28e1dcc6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] zentol commented on pull request #15019: [DRAFT][FLINK-21400] Store attempt numbers outside ExecutionGraph

2021-02-25 Thread GitBox


zentol commented on pull request #15019:
URL: https://github.com/apache/flink/pull/15019#issuecomment-785808932


   One issue is that the semantics of the attempt numbers as a whole are not 
really well-defined; internally we use them to differentiate different 
deployments (for example, when we tried to create deterministic 
`ExecutionAttemptIDs`), but users probably think of them more as "how often did 
this subtask fail".
   
   I'm not even sure how valuable the attempt number is as a whole; I'd think 
that users either want to identify unstable TaskManagers (where the question is 
"how many failures occur on this TM?") or operators ("how often do subtasks of 
this operator fail?"), but the attempt number fulfills neither, because it is 
also incremented for subtasks that did not actually cause a failure.







[GitHub] [flink] twalthr commented on a change in pull request #14996: [FLINK-21435][table] Implement Schema, ResolvedSchema, SchemaResolver

2021-02-25 Thread GitBox


twalthr commented on a change in pull request #14996:
URL: https://github.com/apache/flink/pull/14996#discussion_r582740212



##
File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/Schema.java
##
@@ -0,0 +1,856 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.api;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.table.catalog.Column;
+import org.apache.flink.table.catalog.Column.ComputedColumn;
+import org.apache.flink.table.catalog.Column.MetadataColumn;
+import org.apache.flink.table.catalog.Column.PhysicalColumn;
+import org.apache.flink.table.catalog.Constraint;
+import org.apache.flink.table.catalog.ResolvedSchema;
+import org.apache.flink.table.catalog.SchemaResolver;
+import org.apache.flink.table.catalog.UniqueConstraint;
+import org.apache.flink.table.catalog.WatermarkSpec;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.SqlCallExpression;
+import org.apache.flink.table.types.AbstractDataType;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+import org.apache.flink.table.types.logical.RowType;
+import org.apache.flink.table.utils.EncodingUtils;
+import org.apache.flink.util.Preconditions;
+import org.apache.flink.util.StringUtils;
+
+import javax.annotation.Nullable;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Objects;
+import java.util.Optional;
+import java.util.UUID;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
+
+import static org.apache.flink.table.types.logical.utils.LogicalTypeChecks.hasRoot;
+
+/**
+ * Schema of a table or view.
+ *
+ * <p>A schema represents the schema part of a {@code CREATE TABLE (schema) WITH (options)} DDL
+ * statement in SQL. It defines columns of different kinds, constraints, time attributes, and
+ * watermark strategies. It is possible to reference objects (such as functions or types) across
+ * different catalogs.
+ *
+ * <p>This class is used in the API and catalogs to define an unresolved schema that will be
+ * translated to a {@link ResolvedSchema}. Some methods of this class perform basic validation;
+ * however, the main validation happens during resolution.
+ *
+ * <p>Since an instance of this class is unresolved, it should not be persisted directly. The
+ * {@link #toString()} method shows only a summary of the contained objects.
+ */
+@PublicEvolving
+public final class Schema {
+
+    private final List<UnresolvedColumn> columns;
+
+    private final List<UnresolvedWatermarkSpec> watermarkSpecs;
+
+    private final @Nullable UnresolvedPrimaryKey primaryKey;
+
+    private Schema(
+            List<UnresolvedColumn> columns,
+            List<UnresolvedWatermarkSpec> watermarkSpecs,
+            @Nullable UnresolvedPrimaryKey primaryKey) {
+        this.columns = columns;
+        this.watermarkSpecs = watermarkSpecs;
+        this.primaryKey = primaryKey;
+    }
+
+    /** Builder for configuring and creating instances of {@link Schema}. */
+    public static Schema.Builder newBuilder() {
+        return new Builder();
+    }
+
+    public List<UnresolvedColumn> getColumns() {
+        return columns;
+    }
+
+    public List<UnresolvedWatermarkSpec> getWatermarkSpecs() {
+        return watermarkSpecs;
+    }
+
+    public Optional<UnresolvedPrimaryKey> getPrimaryKey() {
+        return Optional.ofNullable(primaryKey);
+    }
+
+    /** Resolves the given {@link Schema} to a validated {@link ResolvedSchema}. */
+    public ResolvedSchema resolve(SchemaResolver resolver) {
+        return resolver.resolve(this);
+    }
+
+    @Override
+    public String toString() {
+        final List<Object> components = new ArrayList<>();
+        components.addAll(columns);
+        components.addAll(watermarkSpecs);
+        if (primaryKey != null) {
+            components.add(primaryKey);
+        }
+        return components.stream()
+                .map(Objects::toString)
+                .map(s -> "  " + s)
+                .collect(Collectors.joining(", \n", "(\n", "\n)"));
+    }
+
+    @Override
+    public boolean equals(Object o) {
+        if (this == o) {
+            return true;
+

[GitHub] [flink] twalthr commented on a change in pull request #14996: [FLINK-21435][table] Implement Schema, ResolvedSchema, SchemaResolver

2021-02-25 Thread GitBox


twalthr commented on a change in pull request #14996:
URL: https://github.com/apache/flink/pull/14996#discussion_r582741567



##
File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/UniqueConstraint.java
##
@@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.catalog;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.table.utils.EncodingUtils;
+
+import java.util.List;
+import java.util.Objects;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * A unique key constraint. It can also be declared as a PRIMARY KEY.
+ *
+ * @see ConstraintType
+ */
+@PublicEvolving
+public final class UniqueConstraint extends AbstractConstraint {

Review comment:
   Yes, this will happen in the next subtask when the catalog API is 
updated and we introduce utilities for a smooth migration.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15020: [FLINK-21445][kubernetes] Application mode does not set the configuration when building PackagedProgram

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #15020:
URL: https://github.com/apache/flink/pull/15020#issuecomment-785804760


   
   ## CI report:
   
   * 42d551868a43e56292a96ae0af25580c28e1dcc6 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13762)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15015: [FLINK-21479][coordination] Provide read-only interface of TaskManagerTracker to ResourceAllocationStrategy

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #15015:
URL: https://github.com/apache/flink/pull/15015#issuecomment-785556721


   
   ## CI report:
   
   * 69b32e4892cee2e19cd2be8a50f9a791828ec9e7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13741)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] WeiZhong94 commented on a change in pull request #14775: [FLINK-20964][python] Introduce PythonStreamGroupWindowAggregateOperator

2021-02-25 Thread GitBox


WeiZhong94 commented on a change in pull request #14775:
URL: https://github.com/apache/flink/pull/14775#discussion_r582663613



##
File path: flink-python/pyflink/proto/flink-fn-execution.proto
##
@@ -194,8 +194,11 @@ message UserDefinedAggregateFunctions {
   // True if the count(*) agg is inserted by the planner.
   bool count_star_inserted = 11;
 
+  // Whether the aggregation has a group window.
+  bool has_group_window = 12;

Review comment:
   We can use the `HasField` method in Python to check whether the field has 
been set.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] rkhachatryan commented on pull request #14988: [FLINK-21442] [connectors/jdbc] Make XaSinkStateSerializer SNAPSHOT state restorable during recovery

2021-02-25 Thread GitBox


rkhachatryan commented on pull request #14988:
URL: https://github.com/apache/flink/pull/14988#issuecomment-785818711


   Merged as `eb0c19dac1644f11430da7b30b4fb9f828be7464`.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] rkhachatryan closed pull request #14988: [FLINK-21442] [connectors/jdbc] Make XaSinkStateSerializer SNAPSHOT state restorable during recovery

2021-02-25 Thread GitBox


rkhachatryan closed pull request #14988:
URL: https://github.com/apache/flink/pull/14988


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (FLINK-21442) Jdbc XA sink - restore state serialization error

2021-02-25 Thread Roman Khachatryan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Khachatryan resolved FLINK-21442.
---
Resolution: Fixed

Merged to master as eb0c19dac1644f11430da7b30b4fb9f828be7464.
Thanks for the fix [~mobuchowski]

> Jdbc XA sink - restore state serialization error
> 
>
> Key: FLINK-21442
> URL: https://issues.apache.org/jira/browse/FLINK-21442
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.13.0
>Reporter: Maciej Obuchowski
>Assignee: Maciej Obuchowski
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> There are state restoration errors connected to XaSinkStateSerializer and its 
> implementation of SNAPSHOT using an anonymous inner class, which is not 
> restorable because the class is not public.
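
The failure mode described above can be reproduced in isolation. Below is a hedged sketch (class and method names are hypothetical, not Flink's actual serializer code) showing why an anonymous inner class cannot be re-instantiated the way a named public snapshot class can: recovery-style reflective instantiation expects a public class with a public nullary constructor, and anonymous classes are never public.

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.Modifier;

public class AnonymousClassRestoreDemo {

    interface Snapshot {}

    // An anonymous inner class, similar in shape to the problematic SNAPSHOT.
    public static final Snapshot ANONYMOUS_SNAPSHOT = new Snapshot() {};

    // A named public class with a public no-arg constructor restores fine.
    public static class NamedSnapshot implements Snapshot {
        public NamedSnapshot() {}
    }

    /** Simplified stand-in for how a snapshot class is re-instantiated during recovery. */
    public static boolean isRestorable(Class<?> clazz) {
        try {
            Constructor<?> ctor = clazz.getDeclaredConstructor();
            // Both the class and its nullary constructor must be public.
            return Modifier.isPublic(clazz.getModifiers())
                    && Modifier.isPublic(ctor.getModifiers());
        } catch (NoSuchMethodException e) {
            return false; // no nullary constructor at all
        }
    }

    public static void main(String[] args) {
        System.out.println(isRestorable(ANONYMOUS_SNAPSHOT.getClass())); // false
        System.out.println(isRestorable(NamedSnapshot.class)); // true
    }
}
```

The fix in the PR follows the same direction: replace the anonymous class with a named, public, reflectively instantiable one.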



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dawidwys opened a new pull request #15021: [FLINK-21486] Throw exception when restoring Rocks timers with Heap timers enabled

2021-02-25 Thread GitBox


dawidwys opened a new pull request #15021:
URL: https://github.com/apache/flink/pull/15021


   ## What is the purpose of the change
   
   Add a sanity check that fails when a savepoint taken with RocksDB timers is 
restored with the RocksDB state backend and heap timers.
   
   ## Brief change log
   
   * Added a flag to the RocksDB restore operation controlling whether we throw 
an exception when we encounter a priority queue state.
   
   ## Verifying this change
   Added tests in `HeapTimersSnapshottingTest`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (**yes** / no / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21486) Add sanity check when switching from Rocks to Heap timers

2021-02-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21486:
---
Labels: pull-request-available  (was: )

> Add sanity check when switching from Rocks to Heap timers
> -
>
> Key: FLINK-21486
> URL: https://issues.apache.org/jira/browse/FLINK-21486
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Affects Versions: 1.11.3, 1.12.1
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.4, 1.12.3
>
>
> When restoring a savepoint taken with the RocksDB state backend and RocksDB 
> timers into a job using the RocksDB state backend and heap timers, we will 
> silently ignore all timers. It might lead to correctness issues. We should 
> throw an exception in such a case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #15021: [FLINK-21486][1.12] Throw exception when restoring Rocks timers with Heap timers enabled

2021-02-25 Thread GitBox


flinkbot commented on pull request #15021:
URL: https://github.com/apache/flink/pull/15021#issuecomment-785830023


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 702d595f78f9d3dd6a0d9d60fd166a2b3c0f5d4c (Thu Feb 25 
11:34:03 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on a change in pull request #14840: [FLINK-21231][sql-client] add "SHOW VIEWS" to SQL client

2021-02-25 Thread GitBox


wuchong commented on a change in pull request #14840:
URL: https://github.com/apache/flink/pull/14840#discussion_r582760655



##
File path: 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/CliClientTest.java
##
@@ -326,6 +326,26 @@ public void testDropCatalog() throws Exception {
 assertThat(executor.getNumExecuteSqlCalls(), is(1));
 }
 
+@Test
+public void testShowViews() throws Exception {
+TestingExecutor executor =
+new TestingExecutorBuilder()
+.setExecuteSqlConsumer(
+(ignored1, sql) -> {
+SHOW_ROW.setField(0, "v1");

Review comment:
   We should add a condition so that "v1" is only returned when the input `sql` 
equals `show views`.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] AHeise commented on a change in pull request #15013: [FLINK-21490][datastream] Make job graph generation deterministic in respect to hashes of input nodes.

2021-02-25 Thread GitBox


AHeise commented on a change in pull request #15013:
URL: https://github.com/apache/flink/pull/15013#discussion_r582760927



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/graph/StreamingJobGraphGenerator.java
##
@@ -359,6 +366,16 @@ private void setChaining(Map<Integer, byte[]> hashes, List<Map<Integer, byte[]>>
         }
     }
 
+    private static int compareHashes(byte[] hash1, byte[] hash2) {
+        for (int index = 0; index < hash1.length; index++) {
+            int diff = hash2[index] - hash1[index];

Review comment:
   Hashes are currently always of the same size. However, there is no 
contract on the `StreamGraphHasher` that enforces that, so I'll incorporate 
your idea. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14962: [FLINK-18789][sql-client] Use TableEnvironment#executeSql method to execute insert statement in sql client

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #14962:
URL: https://github.com/apache/flink/pull/14962#issuecomment-781306516


   
   ## CI report:
   
   * 5cc8acfd49aa8c77a440d6fdf401a454914c645f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13744)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-13703) AvroTypeInfo requires objects to be strict POJOs (mutable, with setters)

2021-02-25 Thread Patrick Lucas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290867#comment-17290867
 ] 

Patrick Lucas commented on FLINK-13703:
---

I ran into this as well and was thrown off by the confusing error message—glad 
I found this issue to help explain it, thanks [~afedulov].

I have more experience with protobuf which tends to be immutable-by-default, so 
I set Avro's {{createSetters}} flag when starting out. I'll undo that for now, 
but supporting immutable specific records would still be nice.

> AvroTypeInfo requires objects to be strict POJOs (mutable, with setters)
> 
>
> Key: FLINK-13703
> URL: https://issues.apache.org/jira/browse/FLINK-13703
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Alexander Fedulov
>Priority: Minor
>
> There exists an option to generate Avro sources which would represent 
> immutable objects (`createSetters` option set to false) 
> [\[1\]|https://github.com/commercehub-oss/gradle-avro-plugin] , 
> [\[2\]|https://avro.apache.org/docs/current/api/java/org/apache/avro/mojo/AbstractAvroMojo.html].
>  Those objects still have full arguments constructors and are being correctly 
> dealt with by Avro. 
>  `AvroTypeInfo` in Flink performs a check to verify if a Class complies to 
> the strict POJO requirements (including setters) and throws an 
> IllegalStateException("Expecting type to be a PojoTypeInfo") otherwise. Can 
> this check be relaxed to provide better immutability support?
> +Steps to reproduce:+
> 1) Generate Avro sources from schema using `createSetters` option.
> 2) Use generated class in 
> `ConfluentRegistryAvroDeserializationSchema.forSpecific(GeneratedClass.class, 
> schemaRegistryUrl)`



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on a change in pull request #14840: [FLINK-21231][sql-client] add "SHOW VIEWS" to SQL client

2021-02-25 Thread GitBox


wuchong commented on a change in pull request #14840:
URL: https://github.com/apache/flink/pull/14840#discussion_r582760655



##
File path: 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/CliClientTest.java
##
@@ -326,6 +326,26 @@ public void testDropCatalog() throws Exception {
 assertThat(executor.getNumExecuteSqlCalls(), is(1));
 }
 
+@Test
+public void testShowViews() throws Exception {
+TestingExecutor executor =
+new TestingExecutorBuilder()
+.setExecuteSqlConsumer(
+(ignored1, sql) -> {
+SHOW_ROW.setField(0, "v1");

Review comment:
   We should only return "v1" when the input `sql` equals `show views`; 
otherwise the test would also pass when the sql is `show tables`. You can see 
other tests, e.g. `testUseDatabaseAndShowCurrentDatabase`.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15017: [FLINK-12607][rest] Introduce a REST API that returns the maxParallelism

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #15017:
URL: https://github.com/apache/flink/pull/15017#issuecomment-785661508


   
   ## CI report:
   
   * 9a96593aa6cea51f09336a3b284528813d648828 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13745)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] AHeise commented on a change in pull request #15013: [FLINK-21490][datastream] Make job graph generation deterministic in respect to hashes of input nodes.

2021-02-25 Thread GitBox


AHeise commented on a change in pull request #15013:
URL: https://github.com/apache/flink/pull/15013#discussion_r582764397



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/graph/StreamingJobGraphGenerator.java
##
@@ -347,7 +348,13 @@ private void setChaining(Map<Integer, byte[]> hashes, List<Map<Integer, byte[]>>
         final Map<Integer, OperatorChainInfo> chainEntryPoints =
                 buildChainedInputsAndGetHeadInputs(hashes, legacyHashes);
         final Collection<OperatorChainInfo> initialEntryPoints =
-                new ArrayList<>(chainEntryPoints.values());
+                chainEntryPoints.values().stream()
+                        .sorted(
+                                Comparator.comparing(
+                                        operatorChainInfo ->
+                                                hashes.get(operatorChainInfo.getStartNodeId()),
+                                        StreamingJobGraphGenerator::compareHashes))

Review comment:
   No, `nodeId` is incremented in the `StreamEnvironment` for each added 
node. So each job vertex has a unique id, which may change on restart (that's 
why the hashes are used). `startNodeId` is the `nodeId` for the particular 
chain.
   
There are also a couple of places where the nodes are indexed by the 
`nodeId`, so duplicates would break much more.
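
The two-argument `Comparator.comparing(keyExtractor, keyComparator)` pattern used in the hunk can be exercised in isolation. Below is a hedged sketch with hypothetical data (not Flink code): node ids are ordered by their hash bytes using a custom byte-array comparator, which is what makes the entry-point order deterministic across runs.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class DeterministicOrderDemo {

    /** Orders node ids by their hash, mirroring the PR's sorted(...) call. */
    public static List<Integer> orderByHash(Map<Integer, byte[]> hashes) {
        return hashes.keySet().stream()
                // keyExtractor maps nodeId -> hash; Arrays::compare orders the hashes.
                .sorted(Comparator.comparing(hashes::get, Arrays::compare))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Hypothetical nodeId -> hash mapping; in the PR these are operator hashes.
        Map<Integer, byte[]> hashes =
                Map.of(
                        1, new byte[] {9, 0},
                        2, new byte[] {1, 5},
                        3, new byte[] {1, 2});
        System.out.println(orderByHash(hashes)); // [3, 2, 1]
    }
}
```

Whatever iteration order the backing map produces, the result depends only on the hashes, not on insertion or `nodeId` order.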





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] rkhachatryan commented on a change in pull request #15013: [FLINK-21490][datastream] Make job graph generation deterministic in respect to hashes of input nodes.

2021-02-25 Thread GitBox


rkhachatryan commented on a change in pull request #15013:
URL: https://github.com/apache/flink/pull/15013#discussion_r582769787



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/graph/StreamingJobGraphGenerator.java
##
@@ -347,7 +348,13 @@ private void setChaining(Map<Integer, byte[]> hashes, List<Map<Integer, byte[]>>
         final Map<Integer, OperatorChainInfo> chainEntryPoints =
                 buildChainedInputsAndGetHeadInputs(hashes, legacyHashes);
         final Collection<OperatorChainInfo> initialEntryPoints =
-                new ArrayList<>(chainEntryPoints.values());
+                chainEntryPoints.values().stream()
+                        .sorted(
+                                Comparator.comparing(
+                                        operatorChainInfo ->
+                                                hashes.get(operatorChainInfo.getStartNodeId()),
+                                        StreamingJobGraphGenerator::compareHashes))

Review comment:
   Then shouldn't hashes be compared first (before nodeId)?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dawidwys opened a new pull request #15022: [FLINK-21486][1.11] Throw exception when restoring Rocks timers with Heap t…

2021-02-25 Thread GitBox


dawidwys opened a new pull request #15022:
URL: https://github.com/apache/flink/pull/15022


   Backport of #15021 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #15022: [FLINK-21486][1.11] Throw exception when restoring Rocks timers with Heap t…

2021-02-25 Thread GitBox


flinkbot commented on pull request #15022:
URL: https://github.com/apache/flink/pull/15022#issuecomment-785839793


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 98ff9ab5015946d00952b26238f7aaee36b0dc54 (Thu Feb 25 
11:52:30 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14982: [FLINK-XXXXX] Enfore common/unified savepoint format at operator level

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #14982:
URL: https://github.com/apache/flink/pull/14982#issuecomment-783278231


   
   ## CI report:
   
   * 99de033ba3ac125e343b1983698a8337bed717cf Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13710)
 
   * 56576b43861e2f6cd41a26a0d4c0705e46e7f011 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15013: [FLINK-21490][datastream] Make job graph generation deterministic in respect to hashes of input nodes.

2021-02-25 Thread GitBox


flinkbot edited a comment on pull request #15013:
URL: https://github.com/apache/flink/pull/15013#issuecomment-785429756


   
   ## CI report:
   
   * e8d9945955c0730486f3d64da4001b03bfe3be66 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13748)
 
   * e618698ebd320e7c1830b1b2a4c3aa0854ab5112 UNKNOWN
   * 8f3096c94d1de28f9c88803f17455814de3481ba UNKNOWN
   * c96379cbcd18522dd643c5ef0a2150372ce50bb0 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13752)
 
   * 989137b5809d66cdd9f4c9e5e8915fda7eda549b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #15021: [FLINK-21486][1.12] Throw exception when restoring Rocks timers with Heap timers enabled

2021-02-25 Thread GitBox


flinkbot commented on pull request #15021:
URL: https://github.com/apache/flink/pull/15021#issuecomment-785840924


   
   ## CI report:
   
   * 702d595f78f9d3dd6a0d9d60fd166a2b3c0f5d4c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] rkhachatryan commented on a change in pull request #15013: [FLINK-21490][datastream] Make job graph generation deterministic in respect to hashes of input nodes.

2021-02-25 Thread GitBox


rkhachatryan commented on a change in pull request #15013:
URL: https://github.com/apache/flink/pull/15013#discussion_r582769787



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/graph/StreamingJobGraphGenerator.java
##
@@ -347,7 +348,13 @@ private void setChaining(Map<Integer, byte[]> hashes, List<Map<Integer, byte[]>>
         final Map<Integer, OperatorChainInfo> chainEntryPoints =
                 buildChainedInputsAndGetHeadInputs(hashes, legacyHashes);
         final Collection<OperatorChainInfo> initialEntryPoints =
-                new ArrayList<>(chainEntryPoints.values());
+                chainEntryPoints.values().stream()
+                        .sorted(
+                                Comparator.comparing(
+                                        operatorChainInfo ->
+                                                hashes.get(operatorChainInfo.getStartNodeId()),
+                                        StreamingJobGraphGenerator::compareHashes))

Review comment:
   Then shouldn't hashes be compared first (before nodeId)?
   
   Edit:
   1. Then why do we need to compare nodeIds at all?
   2. I'm not sure that node ids are re-generated upon restart unless the 
topology has changed. I think that the generated graph is re-used.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



