[jira] [Created] (FLINK-24584) ClientTest.testRequestUnavailableHost fails on azure

2021-10-18 Thread Xintong Song (Jira)
Xintong Song created FLINK-24584:


 Summary: ClientTest.testRequestUnavailableHost fails on azure
 Key: FLINK-24584
 URL: https://issues.apache.org/jira/browse/FLINK-24584
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Queryable State
Affects Versions: 1.14.0
Reporter: Xintong Song
 Fix For: 1.14.1


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=25194=logs=961f8f81-6b52-53df-09f6-7291a2e4af6a=f53023d8-92c3-5d78-ec7e-70c2bf37be20=15189

{code}
Oct 19 00:31:59 [ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time 
elapsed: 1.597 s <<< FAILURE! - in 
org.apache.flink.queryablestate.network.ClientTest
Oct 19 00:31:59 [ERROR] testRequestUnavailableHost  Time elapsed: 0.025 s  <<< 
FAILURE!
Oct 19 00:31:59 java.lang.AssertionError: Did not throw expected 
ConnectException
Oct 19 00:31:59 at org.junit.Assert.fail(Assert.java:89)
Oct 19 00:31:59 at 
org.apache.flink.queryablestate.network.ClientTest.testRequestUnavailableHost(ClientTest.java:274)
{code}
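
For context, the failing assertion expects that a request sent to an unreachable address surfaces a ConnectException. The minimal, self-contained sketch below only illustrates that pattern; it is not the actual ClientTest code, and the localhost port used is an assumption (it presumes nothing is listening there).

{code:java}
import java.net.ConnectException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class UnavailableHostExample {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) {
            // Assumes nothing is listening on localhost:1, so the connect attempt
            // is refused; ClientTest expects an analogous ConnectException.
            socket.connect(new InetSocketAddress("localhost", 1), 1_000);
            throw new AssertionError("Did not throw expected ConnectException");
        } catch (ConnectException expected) {
            System.out.println("Got expected ConnectException: " + expected.getMessage());
        }
    }
}
{code}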



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23391) KafkaSourceReaderTest.testKafkaSourceMetrics fails on azure

2021-10-18 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17430321#comment-17430321
 ] 

Xintong Song commented on FLINK-23391:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=25194=logs=b0097207-033c-5d9a-b48c-6d4796fbe60d=8338a7d2-16f7-52e5-f576-4b7b3071eb3d=7015

> KafkaSourceReaderTest.testKafkaSourceMetrics fails on azure
> ---
>
> Key: FLINK-23391
> URL: https://issues.apache.org/jira/browse/FLINK-23391
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.13.1
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
> Fix For: 1.13.4
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20456=logs=c5612577-f1f7-5977-6ff6-7432788526f7=53f6305f-55e6-561c-8f1e-3a1dde2c77df=6783
> {code}
> Jul 14 23:00:26 [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 99.93 s <<< FAILURE! - in 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest
> Jul 14 23:00:26 [ERROR] 
> testKafkaSourceMetrics(org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest)
>   Time elapsed: 60.225 s  <<< ERROR!
> Jul 14 23:00:26 java.util.concurrent.TimeoutException: Offsets are not 
> committed successfully. Dangling offsets: 
> {15213={KafkaSourceReaderTest-0=OffsetAndMetadata{offset=10, 
> leaderEpoch=null, metadata=''}}}
> Jul 14 23:00:26   at 
> org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:210)
> Jul 14 23:00:26   at 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.testKafkaSourceMetrics(KafkaSourceReaderTest.java:275)
> Jul 14 23:00:26   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jul 14 23:00:26   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jul 14 23:00:26   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jul 14 23:00:26   at java.lang.reflect.Method.invoke(Method.java:498)
> Jul 14 23:00:26   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Jul 14 23:00:26   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jul 14 23:00:26   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Jul 14 23:00:26   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jul 14 23:00:26   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Jul 14 23:00:26   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
> Jul 14 23:00:26   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jul 14 23:00:26   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> Jul 14 23:00:26   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Jul 14 23:00:26   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Jul 14 23:00:26   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Jul 14 23:00:26   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Jul 14 23:00:26   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> Jul 14 23:00:26   at org.junit.runners.Suite.runChild(Suite.java:128)
> Jul 14 23:00:26   at org.junit.runners.Suite.runChild(Suite.java:27)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Jul 14 23:00:26   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Jul 14 23:00:26
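
The timeout above means the reader's offset commits never became visible on the broker within the wait period. As a rough illustration only (not part of the Flink test; the broker address, group id, and topic name below are assumptions), committed offsets can be inspected directly with the Kafka client:

{code:java}
import java.time.Duration;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class CommittedOffsetCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("group.id", "KafkaSourceReaderTest");      // assumed group id
        props.put("key.deserializer", ByteArrayDeserializer.class.getName());
        props.put("value.deserializer", ByteArrayDeserializer.class.getName());

        TopicPartition tp = new TopicPartition("KafkaSourceReaderTest-0", 0); // assumed topic
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            // Returns null for the partition if no offset has been committed yet,
            // which corresponds to the "dangling offsets" reported by the test.
            Map<TopicPartition, OffsetAndMetadata> committed =
                    consumer.committed(Collections.singleton(tp), Duration.ofSeconds(10));
            System.out.println("Committed offset for " + tp + ": " + committed.get(tp));
        }
    }
}
{code}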

[GitHub] [flink] flinkbot edited a comment on pull request #17514: [FLINK-24581][Tests]compatible uname command with multiple OS on ARM …

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17514:
URL: https://github.com/apache/flink/pull/17514#issuecomment-946287305


   
   ## CI report:
   
   * 3b1673ece4fb4b3071b603a20ad1cc0a38d11d61 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25195)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17464: [FLINK-24413] Casting to a CHAR() and VARCHAR() doesn't trim the stri…

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17464:
URL: https://github.com/apache/flink/pull/17464#issuecomment-942309521


   
   ## CI report:
   
   * c27c0bbf796c7d6e92ddef1c893af8d35a18de54 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25010)
 
   * 3c5954956dfd8e9580b0a1b0a4f0d99173fc9869 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25197)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17464: [FLINK-24413] Casting to a CHAR() and VARCHAR() doesn't trim the stri…

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17464:
URL: https://github.com/apache/flink/pull/17464#issuecomment-942309521


   
   ## CI report:
   
   * c27c0bbf796c7d6e92ddef1c893af8d35a18de54 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25010)
 
   * 3c5954956dfd8e9580b0a1b0a4f0d99173fc9869 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17500: [FLINK-24564][formats] Change the default compression to snappy for parquet, orc, avro in table

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17500:
URL: https://github.com/apache/flink/pull/17500#issuecomment-944278328


   
   ## CI report:
   
   * f326685880c91dfc439431829658b0eeda7a7bd4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25153)
 
   * 0a53a4e8ac2ded109be29135846f25a1a251631c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25196)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-24583) ElasticsearchWriterITCase.testWriteOnBulkIntervalFlush timeout on azure

2021-10-18 Thread Xintong Song (Jira)
Xintong Song created FLINK-24583:


 Summary: ElasticsearchWriterITCase.testWriteOnBulkIntervalFlush 
timeout on azure
 Key: FLINK-24583
 URL: https://issues.apache.org/jira/browse/FLINK-24583
 Project: Flink
  Issue Type: Bug
  Components: Connectors / ElasticSearch
Affects Versions: 1.15.0
 Environment: ElasticsearchWriterITCase
Reporter: Xintong Song
 Fix For: 1.15.0


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=25191=logs=961f8f81-6b52-53df-09f6-7291a2e4af6a=f53023d8-92c3-5d78-ec7e-70c2bf37be20=12452

{code}
Oct 18 23:47:27 [ERROR] Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time 
elapsed: 22.228 s <<< FAILURE! - in 
org.apache.flink.connector.elasticsearch.sink.ElasticsearchWriterITCase
Oct 18 23:47:27 [ERROR] testWriteOnBulkIntervalFlush  Time elapsed: 2.032 s  
<<< ERROR!
Oct 18 23:47:27 java.util.concurrent.TimeoutException: Condition was not met in 
given timeout.
Oct 18 23:47:27 at 
org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:166)
Oct 18 23:47:27 at 
org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:144)
Oct 18 23:47:27 at 
org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:136)
Oct 18 23:47:27 at 
org.apache.flink.connector.elasticsearch.sink.ElasticsearchWriterITCase.testWriteOnBulkIntervalFlush(ElasticsearchWriterITCase.java:139)
{code}
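
The failure is a plain condition-polling timeout: waitUntilCondition repeatedly evaluates a predicate (here, that the bulk-interval flush produced the expected documents) until a deadline passes. The sketch below is a generic stand-in for that kind of helper, with assumed names and polling interval rather than Flink's actual CommonTestUtils API:

{code:java}
import java.time.Duration;
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

public final class WaitUntilConditionSketch {
    // Polls the condition until it holds or the timeout expires; mirrors the
    // "Condition was not met in given timeout" failure seen in the log.
    static void waitUntilCondition(BooleanSupplier condition, Duration timeout)
            throws InterruptedException, TimeoutException {
        long deadlineNanos = System.nanoTime() + timeout.toNanos();
        while (!condition.getAsBoolean()) {
            if (System.nanoTime() >= deadlineNanos) {
                throw new TimeoutException("Condition was not met in given timeout.");
            }
            Thread.sleep(100L); // assumed polling interval
        }
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        waitUntilCondition(() -> System.currentTimeMillis() - start > 500, Duration.ofSeconds(5));
        System.out.println("Condition met.");
    }
}
{code}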



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-24583) ElasticsearchWriterITCase.testWriteOnBulkIntervalFlush timeout on azure

2021-10-18 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-24583:
-
Environment: (was: ElasticsearchWriterITCase)

> ElasticsearchWriterITCase.testWriteOnBulkIntervalFlush timeout on azure
> ---
>
> Key: FLINK-24583
> URL: https://issues.apache.org/jira/browse/FLINK-24583
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.15.0
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=25191=logs=961f8f81-6b52-53df-09f6-7291a2e4af6a=f53023d8-92c3-5d78-ec7e-70c2bf37be20=12452
> {code}
> Oct 18 23:47:27 [ERROR] Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 22.228 s <<< FAILURE! - in 
> org.apache.flink.connector.elasticsearch.sink.ElasticsearchWriterITCase
> Oct 18 23:47:27 [ERROR] testWriteOnBulkIntervalFlush  Time elapsed: 2.032 s  
> <<< ERROR!
> Oct 18 23:47:27 java.util.concurrent.TimeoutException: Condition was not met 
> in given timeout.
> Oct 18 23:47:27   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:166)
> Oct 18 23:47:27   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:144)
> Oct 18 23:47:27   at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:136)
> Oct 18 23:47:27   at 
> org.apache.flink.connector.elasticsearch.sink.ElasticsearchWriterITCase.testWriteOnBulkIntervalFlush(ElasticsearchWriterITCase.java:139)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #17500: [FLINK-24564][formats] Change the default compression to snappy for parquet, orc, avro in table

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17500:
URL: https://github.com/apache/flink/pull/17500#issuecomment-944278328


   
   ## CI report:
   
   * f326685880c91dfc439431829658b0eeda7a7bd4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25153)
 
   * 0a53a4e8ac2ded109be29135846f25a1a251631c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] SteNicholas commented on pull request #17500: [FLINK-24564][formats] Change the default compression to snappy for parquet, orc, avro in table

2021-10-18 Thread GitBox


SteNicholas commented on pull request #17500:
URL: https://github.com/apache/flink/pull/17500#issuecomment-946324910


   @JingsongLi, sorry for missing the update to `ParquetFileSystemITCase`. I have 
fixed the `testNonPartition` test case of `ParquetFileSystemITCase`. Please help 
review it again.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24582) PrometheusReporterEndToEndITCase fails due to timeout

2021-10-18 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-24582:
-
Fix Version/s: 1.15.0

> PrometheusReporterEndToEndITCase fails due to timeout
> -
>
> Key: FLINK-24582
> URL: https://issues.apache.org/jira/browse/FLINK-24582
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Affects Versions: 1.12.5, 1.15.0
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=25180=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=070ff179-953e-5bda-71fa-d6599415701c=23451
> {code}
> Oct 18 18:17:44 [ERROR] Tests run: 6, Failures: 0, Errors: 6, Skipped: 0, 
> Time elapsed: 9.138 s <<< FAILURE! - in 
> org.apache.flink.metrics.prometheus.tests.PrometheusReporterEndToEndITCase
> Oct 18 18:17:44 [ERROR] testReporter[0: Jar in 'lib', instantiated via 
> reflection]  Time elapsed: 1.779 s  <<< ERROR!
> Oct 18 18:17:44 java.io.IOException: Process ([wget, -q, -P, 
> /home/vsts/work/1/e2e_cache/downloads/-1930370649, --timeout, 240, 
> https://github.com/prometheus/prometheus/releases/download/v2.4.3/prometheus-2.4.3.linux-amd64.tar.gz])
>  exceeded timeout (60) or number of retries (3).
> Oct 18 18:17:44   at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlockingWithRetry(AutoClosableProcess.java:168)
> Oct 18 18:17:44   at 
> org.apache.flink.tests.util.cache.AbstractDownloadCache.getOrDownload(AbstractDownloadCache.java:135)
> Oct 18 18:17:44   at 
> org.apache.flink.tests.util.cache.PersistingDownloadCache.getOrDownload(PersistingDownloadCache.java:36)
> Oct 18 18:17:44   at 
> org.apache.flink.metrics.prometheus.tests.PrometheusReporterEndToEndITCase.testReporter(PrometheusReporterEndToEndITCase.java:209)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-24582) PrometheusReporterEndToEndITCase fails due to timeout

2021-10-18 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-24582:
-
Affects Version/s: 1.12.5

> PrometheusReporterEndToEndITCase fails due to timeout
> -
>
> Key: FLINK-24582
> URL: https://issues.apache.org/jira/browse/FLINK-24582
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Affects Versions: 1.12.5, 1.15.0
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=25180=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=070ff179-953e-5bda-71fa-d6599415701c=23451
> {code}
> Oct 18 18:17:44 [ERROR] Tests run: 6, Failures: 0, Errors: 6, Skipped: 0, 
> Time elapsed: 9.138 s <<< FAILURE! - in 
> org.apache.flink.metrics.prometheus.tests.PrometheusReporterEndToEndITCase
> Oct 18 18:17:44 [ERROR] testReporter[0: Jar in 'lib', instantiated via 
> reflection]  Time elapsed: 1.779 s  <<< ERROR!
> Oct 18 18:17:44 java.io.IOException: Process ([wget, -q, -P, 
> /home/vsts/work/1/e2e_cache/downloads/-1930370649, --timeout, 240, 
> https://github.com/prometheus/prometheus/releases/download/v2.4.3/prometheus-2.4.3.linux-amd64.tar.gz])
>  exceeded timeout (60) or number of retries (3).
> Oct 18 18:17:44   at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlockingWithRetry(AutoClosableProcess.java:168)
> Oct 18 18:17:44   at 
> org.apache.flink.tests.util.cache.AbstractDownloadCache.getOrDownload(AbstractDownloadCache.java:135)
> Oct 18 18:17:44   at 
> org.apache.flink.tests.util.cache.PersistingDownloadCache.getOrDownload(PersistingDownloadCache.java:36)
> Oct 18 18:17:44   at 
> org.apache.flink.metrics.prometheus.tests.PrometheusReporterEndToEndITCase.testReporter(PrometheusReporterEndToEndITCase.java:209)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-24582) PrometheusReporterEndToEndITCase fails due to timeout

2021-10-18 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17430290#comment-17430290
 ] 

Xintong Song commented on FLINK-24582:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=25185=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529=25423

> PrometheusReporterEndToEndITCase fails due to timeout
> -
>
> Key: FLINK-24582
> URL: https://issues.apache.org/jira/browse/FLINK-24582
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Affects Versions: 1.15.0
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=25180=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=070ff179-953e-5bda-71fa-d6599415701c=23451
> {code}
> Oct 18 18:17:44 [ERROR] Tests run: 6, Failures: 0, Errors: 6, Skipped: 0, 
> Time elapsed: 9.138 s <<< FAILURE! - in 
> org.apache.flink.metrics.prometheus.tests.PrometheusReporterEndToEndITCase
> Oct 18 18:17:44 [ERROR] testReporter[0: Jar in 'lib', instantiated via 
> reflection]  Time elapsed: 1.779 s  <<< ERROR!
> Oct 18 18:17:44 java.io.IOException: Process ([wget, -q, -P, 
> /home/vsts/work/1/e2e_cache/downloads/-1930370649, --timeout, 240, 
> https://github.com/prometheus/prometheus/releases/download/v2.4.3/prometheus-2.4.3.linux-amd64.tar.gz])
>  exceeded timeout (60) or number of retries (3).
> Oct 18 18:17:44   at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlockingWithRetry(AutoClosableProcess.java:168)
> Oct 18 18:17:44   at 
> org.apache.flink.tests.util.cache.AbstractDownloadCache.getOrDownload(AbstractDownloadCache.java:135)
> Oct 18 18:17:44   at 
> org.apache.flink.tests.util.cache.PersistingDownloadCache.getOrDownload(PersistingDownloadCache.java:36)
> Oct 18 18:17:44   at 
> org.apache.flink.metrics.prometheus.tests.PrometheusReporterEndToEndITCase.testReporter(PrometheusReporterEndToEndITCase.java:209)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wujinhu commented on pull request #7798: [FLINK-11388][fs] Add Aliyun OSS recoverable writer

2021-10-18 Thread GitBox


wujinhu commented on pull request #7798:
URL: https://github.com/apache/flink/pull/7798#issuecomment-946322975


   @kl0u Can we reopen this PR? Some users depend on this feature and I can 
keep working on it. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-24582) PrometheusReporterEndToEndITCase fails due to timeout

2021-10-18 Thread Xintong Song (Jira)
Xintong Song created FLINK-24582:


 Summary: PrometheusReporterEndToEndITCase fails due to timeout
 Key: FLINK-24582
 URL: https://issues.apache.org/jira/browse/FLINK-24582
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Metrics
Affects Versions: 1.15.0
Reporter: Xintong Song


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=25180=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=070ff179-953e-5bda-71fa-d6599415701c=23451

{code}
Oct 18 18:17:44 [ERROR] Tests run: 6, Failures: 0, Errors: 6, Skipped: 0, Time 
elapsed: 9.138 s <<< FAILURE! - in 
org.apache.flink.metrics.prometheus.tests.PrometheusReporterEndToEndITCase
Oct 18 18:17:44 [ERROR] testReporter[0: Jar in 'lib', instantiated via 
reflection]  Time elapsed: 1.779 s  <<< ERROR!
Oct 18 18:17:44 java.io.IOException: Process ([wget, -q, -P, 
/home/vsts/work/1/e2e_cache/downloads/-1930370649, --timeout, 240, 
https://github.com/prometheus/prometheus/releases/download/v2.4.3/prometheus-2.4.3.linux-amd64.tar.gz])
 exceeded timeout (60) or number of retries (3).
Oct 18 18:17:44 at 
org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlockingWithRetry(AutoClosableProcess.java:168)
Oct 18 18:17:44 at 
org.apache.flink.tests.util.cache.AbstractDownloadCache.getOrDownload(AbstractDownloadCache.java:135)
Oct 18 18:17:44 at 
org.apache.flink.tests.util.cache.PersistingDownloadCache.getOrDownload(PersistingDownloadCache.java:36)
Oct 18 18:17:44 at 
org.apache.flink.metrics.prometheus.tests.PrometheusReporterEndToEndITCase.testReporter(PrometheusReporterEndToEndITCase.java:209)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-web] leonardBang commented on a change in pull request #473: Add Flink 1.13.3 release

2021-10-18 Thread GitBox


leonardBang commented on a change in pull request #473:
URL: https://github.com/apache/flink-web/pull/473#discussion_r731454050



##
File path: _posts/2021-10-16-release-1.13.3.md
##
@@ -0,0 +1,68 @@
+---
+layout: post
+title:  "Apache Flink 1.13.3 Released"
+date:   2021-10-16 00:00:00
+categories: news
+authors:
+- chesnay:
+  name: "Chesnay Schepler"
+
+---
+
+The Apache Flink community released the third bugfix version of the Apache 
Flink 1.13 series.
+
+This release includes 136 fixes and minor improvements for Flink 1.13.3. The 
list below includes bugfixes and improvements. For a complete list of all 
changes see:
+[JIRA](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12350329).
+
+We highly recommend all users to upgrade to Flink 1.13.3.
+
+Updated Maven dependencies:
+
+```xml
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-java</artifactId>
+  <version>1.13.3</version>
+</dependency>
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-streaming-java_2.11</artifactId>
+  <version>1.13.3</version>
+</dependency>
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-clients_2.11</artifactId>
+  <version>1.13.3</version>
+</dependency>
+```
+
+You can find the binaries on the updated [Downloads page]({{ site.baseurl 
}}/downloads.html).
+
+Below you can find more information on changes that might affect the behavior 
of Flink:

Review comment:
   It makes sense to me




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-23944) PulsarSourceITCase.testTaskManagerFailure is instable

2021-10-18 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17430288#comment-17430288
 ] 

Xintong Song commented on FLINK-23944:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=25137=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=24971

> PulsarSourceITCase.testTaskManagerFailure is instable
> -
>
> Key: FLINK-23944
> URL: https://issues.apache.org/jira/browse/FLINK-23944
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Affects Versions: 1.14.0
>Reporter: Dian Fu
>Assignee: Yufan Sheng
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0, 1.14.1
>
>
> [https://dev.azure.com/dianfu/Flink/_build/results?buildId=430=logs=f3dc9b18-b77a-55c1-591e-264c46fe44d1=2d3cd81e-1c37-5c31-0ee4-f5d5cdb9324d]
> It's from my personal Azure pipeline; however, I'm pretty sure that I have 
> not touched any code related to this. 
> {code:java}
> Aug 24 10:44:13 [ERROR] testTaskManagerFailure{TestEnvironment, 
> ExternalContext, ClusterControllable}[1] Time elapsed: 258.397 s <<< FAILURE! 
> Aug 24 10:44:13 java.lang.AssertionError: Aug 24 10:44:13 Aug 24 10:44:13 
> Expected: Records consumed by Flink should be identical to test data and 
> preserve the order in split Aug 24 10:44:13 but: Mismatched record at 
> position 7: Expected '0W6SzacX7MNL4xLL3BZ8C3ljho4iCydbvxIl' but was 
> 'wVi5JaJpNvgkDEOBRC775qHgw0LyRW2HBxwLmfONeEmr' Aug 24 10:44:13 at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) Aug 24 10:44:13 
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8) Aug 24 
> 10:44:13 at 
> org.apache.flink.connectors.test.common.testsuites.SourceTestSuiteBase.testTaskManagerFailure(SourceTestSuiteBase.java:271)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-24401) TM cannot exit after Metaspace OOM

2021-10-18 Thread future (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17430283#comment-17430283
 ] 

future commented on FLINK-24401:


Hi [~pnowojski], thanks for your reply. I'm sorry, I didn't save the stack 
trace of the TM; I only saved the log. 

> TM cannot exit after Metaspace OOM
> --
>
> Key: FLINK-24401
> URL: https://issues.apache.org/jira/browse/FLINK-24401
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination, Runtime / Task
>Affects Versions: 1.12.0, 1.13.0
>Reporter: future
>Priority: Major
> Fix For: 1.14.1, 1.13.4
>
> Attachments: image-2021-09-29-12-00-28-510.png, 
> image-2021-09-29-12-00-44-812.png
>
>
> Hi masters, from the code and the log we can see that a regular OOM calls 
> terminateJVM directly, while a Metaspace OutOfMemoryError triggers a graceful 
> shutdown. The code comment mentions: {{_it does not usually require more class 
> loading to fail again with the Metaspace OutOfMemoryError_}}.
> However, we encountered the following: after the Metaspace OutOfMemoryError, a 
> {{_java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner$Result_}} makes the TM 
> unable to exit; it keeps retrying, keeps hitting NoClassDefFoundError, and keeps 
> failing class loading, until the TM is killed manually.
> I want to add a catch for Throwable in the onFatalError method and call 
> terminateJVM() directly in the catch block. Is there any problem with this strategy? 
>  
> [code link 
> |https://github.com/apache/flink/blob/4fe9f525a92319acc1e3434bebed601306f7a16f/flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/TaskManagerRunner.java#L312]
> picture:
>  
> !image-2021-09-29-12-00-44-812.png|width=1337,height=692!
>   !image-2021-09-29-12-00-28-510.png!
>  
>  
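
To make the proposed strategy concrete, here is a minimal sketch of a fatal-error handler that falls back to halting the JVM when even the error-handling path fails (for example with the NoClassDefFoundError described above). All names and the exit code are assumptions for illustration; this is not the actual TaskManagerRunner code.

{code:java}
public final class FatalErrorHandlerSketch {
    private static final int FATAL_EXIT_CODE = -17; // assumed exit code

    static void onFatalError(Throwable error) {
        try {
            // The regular graceful-shutdown path would run here; after a Metaspace
            // OOM it may itself require class loading and throw NoClassDefFoundError.
            System.err.println("Fatal error, shutting down: " + error);
        } catch (Throwable t) {
            // Last resort: halt() skips shutdown hooks and avoids further class loading.
            Runtime.getRuntime().halt(FATAL_EXIT_CODE);
        }
    }

    public static void main(String[] args) {
        onFatalError(new OutOfMemoryError("Metaspace"));
    }
}
{code}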



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-24564) Change the default compression to snappy for parquet, orc, avro in table

2021-10-18 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-24564:


Assignee: Nicholas Jiang

> Change the default compression to snappy for parquet, orc, avro in table
> 
>
> Key: FLINK-24564
> URL: https://issues.apache.org/jira/browse/FLINK-24564
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Runtime
>Reporter: Jingsong Lee
>Assignee: Nicholas Jiang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> Based on the experience of other frameworks, snappy compression is recommended 
> as the default, which reduces the file size.
> This does not affect reading, because these formats automatically decompress 
> the file based on the header information of the file.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] xintongsong commented on a change in pull request #17496: [FLINK-24562][yarn] YarnResourceManagerDriverTest should not use ContainerStatusPBImpl.newInstance

2021-10-18 Thread GitBox


xintongsong commented on a change in pull request #17496:
URL: https://github.com/apache/flink/pull/17496#discussion_r731439124



##
File path: 
flink-yarn/src/test/java/org/apache/flink/yarn/TestingContainerStatus.java
##
@@ -21,16 +21,23 @@
 import org.apache.hadoop.yarn.api.records.ContainerId;
 import org.apache.hadoop.yarn.api.records.ContainerState;
 import org.apache.hadoop.yarn.api.records.ContainerStatus;
-import org.apache.hadoop.yarn.api.records.impl.pb.ContainerStatusPBImpl;
 
 /** A {@link ContainerStatus} implementation for testing. */
-class TestingContainerStatus extends ContainerStatusPBImpl {
+class TestingContainerStatus extends ContainerStatus {
 
 private final ContainerId containerId;
 private final ContainerState containerState;
 private final String diagnostics;
 private final int exitStatus;
 
+public static ContainerStatus newInstance(
+ContainerId containerId,
+ContainerState containerState,
+String diagnostics,
+int exitStatus) {
+return new TestingContainerStatus(containerId, containerState, 
diagnostics, exitStatus);
+}

Review comment:
   Why do we need this factory method when we already have a constructor with 
exactly the same arguments?

##
File path: 
flink-yarn/src/test/java/org/apache/flink/yarn/TestingContainerStatus.java
##
@@ -21,16 +21,23 @@
 import org.apache.hadoop.yarn.api.records.ContainerId;
 import org.apache.hadoop.yarn.api.records.ContainerState;
 import org.apache.hadoop.yarn.api.records.ContainerStatus;
-import org.apache.hadoop.yarn.api.records.impl.pb.ContainerStatusPBImpl;
 
 /** A {@link ContainerStatus} implementation for testing. */
-class TestingContainerStatus extends ContainerStatusPBImpl {
+class TestingContainerStatus extends ContainerStatus {

Review comment:
   I'm not sure about this change.
   
   The reason we extend a concrete implementation rather than the abstract class 
here is that the abstract methods that need to be implemented for the abstract 
class can differ across Hadoop versions. Extending a concrete implementation 
makes sure all abstract methods required by the Hadoop version it compiles 
against are implemented.
   
   This is probably not a problem for `ContainerStatus`, but we have encountered 
problems with other Hadoop classes, which we work around in this way. It would be 
nice to align the way we handle these Hadoop classes.
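
   For illustration, a minimal sketch of that suggestion (assumed, not the merged code; it presumes hadoop-yarn-api/common on the classpath): keep extending the concrete PB implementation so every abstract method required by the compiled-against Hadoop version is already implemented, and only override the getters the test needs.

```java
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ContainerState;
import org.apache.hadoop.yarn.api.records.impl.pb.ContainerStatusPBImpl;

/** Sketch of a testing container status that extends the concrete PB implementation. */
class TestingContainerStatusSketch extends ContainerStatusPBImpl {
    private final ContainerId containerId;
    private final ContainerState containerState;
    private final String diagnostics;
    private final int exitStatus;

    TestingContainerStatusSketch(
            ContainerId containerId,
            ContainerState containerState,
            String diagnostics,
            int exitStatus) {
        this.containerId = containerId;
        this.containerState = containerState;
        this.diagnostics = diagnostics;
        this.exitStatus = exitStatus;
    }

    // Only the getters the test relies on are overridden; everything else keeps
    // the behavior of the concrete base class.
    @Override
    public ContainerId getContainerId() {
        return containerId;
    }

    @Override
    public ContainerState getState() {
        return containerState;
    }

    @Override
    public String getDiagnostics() {
        return diagnostics;
    }

    @Override
    public int getExitStatus() {
        return exitStatus;
    }
}
```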




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24581) compatible uname command with multiple OS on ARM platform

2021-10-18 Thread zhao bo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17430274#comment-17430274
 ] 

zhao bo commented on FLINK-24581:
-

working on it

> compatible uname command with multiple OS on ARM platform
> -
>
> Key: FLINK-24581
> URL: https://issues.apache.org/jira/browse/FLINK-24581
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: zhao bo
>Priority: Minor
>  Labels: pull-request-available
>
> Currently there are multiple ARM chip providers, and the associated OS might 
> be Linux or macOS.
> The issue hits on macOS running on an ARM platform: we get an illegal-option 
> error from `uname -i`.
> On generic Linux on ARM, `uname -i` works fine.
>  
> For this reason, we need an approach that works on both.
> From trying Linux and macOS on ARM, both return the same value via `uname -m`.
>  
> So we propose a small fix that replaces the affected `uname -i` calls with 
> `uname -m`.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #17514: [FLINK-24581][Tests]compatible uname command with multiple OS on ARM …

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17514:
URL: https://github.com/apache/flink/pull/17514#issuecomment-946287305


   
   ## CI report:
   
   * 3b1673ece4fb4b3071b603a20ad1cc0a38d11d61 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25195)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #17514: [FLINK-24581][Tests]compatible uname command with multiple OS on ARM …

2021-10-18 Thread GitBox


flinkbot commented on pull request #17514:
URL: https://github.com/apache/flink/pull/17514#issuecomment-946287673


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 3b1673ece4fb4b3071b603a20ad1cc0a38d11d61 (Tue Oct 19 
01:16:02 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-24581).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #17514: [FLINK-24581][Tests]compatible uname command with multiple OS on ARM …

2021-10-18 Thread GitBox


flinkbot commented on pull request #17514:
URL: https://github.com/apache/flink/pull/17514#issuecomment-946287305


   
   ## CI report:
   
   * 3b1673ece4fb4b3071b603a20ad1cc0a38d11d61 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24581) compatible uname command with multiple OS on ARM platform

2021-10-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-24581:
---
Labels: pull-request-available  (was: )

> compatible uname command with multiple OS on ARM platform
> -
>
> Key: FLINK-24581
> URL: https://issues.apache.org/jira/browse/FLINK-24581
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: zhao bo
>Priority: Minor
>  Labels: pull-request-available
>
> Currently there are multiple ARM chip providers, and the associated OS might 
> be Linux or macOS.
> The issue hits on macOS running on an ARM platform: we get an illegal-option 
> error from `uname -i`.
> On generic Linux on ARM, `uname -i` works fine.
>  
> For this reason, we need an approach that works on both.
> From trying Linux and macOS on ARM, both return the same value via `uname -m`.
>  
> So we propose a small fix that replaces the affected `uname -i` calls with 
> `uname -m`.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] bzhaoopenstack opened a new pull request #17514: [FLINK-24581][Tests]compatible uname command with multiple OS on ARM …

2021-10-18 Thread GitBox


bzhaoopenstack opened a new pull request #17514:
URL: https://github.com/apache/flink/pull/17514


   …platform
   
   Exchange `uname -i` for `uname -m` on macOS, to avoid problems when we test 
on ARM.
   
   Affected JIRA: FLINK-24581
   
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluser with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-24581) compatible uname command with multiple OS on ARM platform

2021-10-18 Thread zhao bo (Jira)
zhao bo created FLINK-24581:
---

 Summary: compatible uname command with multiple OS on ARM platform
 Key: FLINK-24581
 URL: https://issues.apache.org/jira/browse/FLINK-24581
 Project: Flink
  Issue Type: Bug
  Components: Tests
Reporter: zhao bo


Currently there are multiple ARM chip providers, and the associated OS might 
be Linux or macOS.

The issue hits on macOS running on an ARM platform: we get an illegal-option 
error from `uname -i`.

On generic Linux on ARM, `uname -i` works fine.

For this reason, we need an approach that works on both.

From trying Linux and macOS on ARM, both return the same value via `uname -m`.

So we propose a small fix that replaces the affected `uname -i` calls with 
`uname -m`.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24580) Kinesis connect time out error is not handled as recoverable

2021-10-18 Thread John Karp (Jira)
John Karp created FLINK-24580:
-

 Summary: Kinesis connect time out error is not handled as 
recoverable
 Key: FLINK-24580
 URL: https://issues.apache.org/jira/browse/FLINK-24580
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kinesis
Affects Versions: 1.13.2
Reporter: John Karp


Several times a day, transient Kinesis errors cause our Flink job to fail:
{noformat}
org.apache.flink.kinesis.shaded.com.amazonaws.SdkClientException: Unable to 
execute HTTP request: Connect to kinesis.us-east-1.amazonaws.com:443 
[kinesis.us-east-1.amazonaws.com/3.91.171.253] failed: connect timed out
at 
org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1207)
at 
org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1153)
at 
org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802)
at 
org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
at 
org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
at 
org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
at 
org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
at 
org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550)
at 
org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530)
at 
org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.doInvoke(AmazonKinesisClient.java:2893)
at 
org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.invoke(AmazonKinesisClient.java:2860)
at 
org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.invoke(AmazonKinesisClient.java:2849)
at 
org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.executeGetRecords(AmazonKinesisClient.java:1319)
at 
org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.getRecords(AmazonKinesisClient.java:1288)
at 
org.apache.flink.streaming.connectors.kinesis.proxy.KinesisProxy.getRecords(KinesisProxy.java:292)
at 
org.apache.flink.streaming.connectors.kinesis.internals.publisher.polling.PollingRecordPublisher.getRecords(PollingRecordPublisher.java:168)
at 
org.apache.flink.streaming.connectors.kinesis.internals.publisher.polling.PollingRecordPublisher.run(PollingRecordPublisher.java:113)
at 
org.apache.flink.streaming.connectors.kinesis.internals.publisher.polling.PollingRecordPublisher.run(PollingRecordPublisher.java:102)
at 
org.apache.flink.streaming.connectors.kinesis.internals.ShardConsumer.run(ShardConsumer.java:114)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: 
org.apache.flink.kinesis.shaded.org.apache.http.conn.ConnectTimeoutException: 
Connect to kinesis.us-east-1.amazonaws.com:443 
[kinesis.us-east-1.amazonaws.com/3.91.171.253] failed: connect timed out
at 
org.apache.flink.kinesis.shaded.org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at 
org.apache.flink.kinesis.shaded.org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:374)
at jdk.internal.reflect.GeneratedMethodAccessor168.invoke(Unknown 
Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.apache.flink.kinesis.shaded.com.amazonaws.http.conn.ClientConnectionManagerFactory$Handler.invoke(ClientConnectionManagerFactory.java:76)
at 
org.apache.flink.kinesis.shaded.com.amazonaws.http.conn.$Proxy47.connect(Unknown
 Source)
at 
org.apache.flink.kinesis.shaded.org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393)
at 

[GitHub] [flink] flinkbot edited a comment on pull request #17440: [FLINK-24468][runtime] Wait for the channel activation before creating partition request client

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17440:
URL: https://github.com/apache/flink/pull/17440#issuecomment-938816662


   
   ## CI report:
   
   * 16b451eba79ef7015e19adf6f0eb7c8b39c70715 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25189)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17513: [FLINK-24579][table-api][sql-client] Support toString for arrays as c…

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17513:
URL: https://github.com/apache/flink/pull/17513#issuecomment-946022176


   
   ## CI report:
   
   * 7ed6fd32b50df2743eb82c9fcc7a0edefc8c11ee Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25190)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24516) Modernize Maven Archetype

2021-10-18 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17430205#comment-17430205
 ] 

Martijn Visser commented on FLINK-24516:


Definitely we should add a Table API quickstart, but I think it makes sense to 
create a follow-up ticket for that one?

> Modernize Maven Archetype
> -
>
> Key: FLINK-24516
> URL: https://issues.apache.org/jira/browse/FLINK-24516
> Project: Flink
>  Issue Type: Improvement
>  Components: Quickstarts
>Reporter: Seth Wiesman
>Assignee: Seth Wiesman
>Priority: Major
>  Labels: pull-request-available
>
> The maven archetypes used by many to start their first Flink application do 
> not reflect the project's current state. 
> Issues:
>  * They still bundle the DataSet API and recommend it for batch processing
>  * The JavaDoc recommends deprecated APIs
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-24549) FlinkKinesisConsumer does not work with generic types disabled

2021-10-18 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17430203#comment-17430203
 ] 

Martijn Visser commented on FLINK-24549:


Would be good if you can pick it up [~dannycranmer]. Thanks!

> FlinkKinesisConsumer does not work with generic types disabled
> --
>
> Key: FLINK-24549
> URL: https://issues.apache.org/jira/browse/FLINK-24549
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.14.0, 1.12.5, 1.13.2
>Reporter: Dawid Wysakowicz
>Priority: Major
>
> FlinkKinesisConsumer uses {{GenericTypeInfo}} internally, which makes it 
> impossible to disable generic types in the entire job.
> {code}
> java.lang.UnsupportedOperationException: Generic types have been disabled in 
> the ExecutionConfig and type 
> org.apache.flink.streaming.connectors.kinesis.model.SequenceNumber is treated 
> as a generic type.
> at 
> org.apache.flink.api.java.typeutils.GenericTypeInfo.createSerializer(GenericTypeInfo.java:87)
> at 
> org.apache.flink.api.java.typeutils.TupleTypeInfo.createSerializer(TupleTypeInfo.java:104)
> at 
> org.apache.flink.api.java.typeutils.TupleTypeInfo.createSerializer(TupleTypeInfo.java:49)
> at 
> org.apache.flink.api.java.typeutils.ListTypeInfo.createSerializer(ListTypeInfo.java:99)
> at 
> org.apache.flink.api.common.state.StateDescriptor.initializeSerializerUnlessSet(StateDescriptor.java:302)
> at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackend.getListState(DefaultOperatorStateBackend.java:264)
> at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackend.getUnionListState(DefaultOperatorStateBackend.java:216)
> at 
> org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer.initializeState(FlinkKinesisConsumer.java:443)
> at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:189)
> at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:171)
> at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
> at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:111)
> at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:290)
> at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:425)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$2(StreamTask.java:535)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:525)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:565)
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:755)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:570)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> Reported in the ML: 
> https://lists.apache.org/thread.html/r6e7723a9d1d77e223fbab481c9a53cbd4a2189ee7442302ee3c33b95%40%3Cuser.flink.apache.org%3E
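
To reproduce the constraint described above, it is enough to disable generic types on the ExecutionConfig: any type that falls back to GenericTypeInfo (as the consumer's internal SequenceNumber does) then fails when a serializer is created. A minimal sketch, assuming flink-streaming-java is on the classpath; adding a FlinkKinesisConsumer source afterwards would fail as shown in the stack trace above.

{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class DisableGenericTypesExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Forbids the Kryo fallback; serializer creation for GenericTypeInfo-backed
        // types (such as the consumer's internal SequenceNumber state) will then
        // throw the UnsupportedOperationException shown above.
        env.getConfig().disableGenericTypes();

        // A FlinkKinesisConsumer source added here would fail when restoring its state.
    }
}
{code}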



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #17510: [FLINK-24355][runtime] Expose the flag for enabling checkpoints after tasks finish in the Web UI

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17510:
URL: https://github.com/apache/flink/pull/17510#issuecomment-945683858


   
   ## CI report:
   
   * d2d050bbae90c773386dcfaf77d3b794f8398599 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25187)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17441: [FLINK-24461][table] PrintUtils common now accepts only internal data types

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17441:
URL: https://github.com/apache/flink/pull/17441#issuecomment-938905036


   
   ## CI report:
   
   * e78df5a2a0572a2cfe043bf7737f1e52fe9f89af Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25188)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-23486) Add metrics for the ChangelogStateBackend

2021-10-18 Thread Roman Khachatryan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17430159#comment-17430159
 ] 

Roman Khachatryan commented on FLINK-23486:
---

Sorry [~complone], I don't fully understand your comment.

Could you please clarify what you mean by "pre-action of the buried point", 
"write the checkpoint log" in the context of metrics, and "bury these monitoring 
indicators"?

 

I think we can continue the discussion in FLINK-24402 if it's only about 
monitoring back-pressure.

> Add metrics for the ChangelogStateBackend
> -
>
> Key: FLINK-23486
> URL: https://issues.apache.org/jira/browse/FLINK-23486
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Metrics, Runtime / State Backends
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> [https://docs.google.com/document/d/1k5WkWIYzs3n3GYQC76H9BLGxvN3wuq7qUHJuBPR9YX0/edit?usp=sharing]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #17512: [FLINK-24563][table-planner][test]

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17512:
URL: https://github.com/apache/flink/pull/17512#issuecomment-945871287


   
   ## CI report:
   
   * 596c788e5cc83b2580eb6d338706db135b4ffb1f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25179)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17439: [FLINK-23271][table-planner] Disallow cast from decimal numerics to boolean

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17439:
URL: https://github.com/apache/flink/pull/17439#issuecomment-938696330


   
   ## CI report:
   
   * 57a13401fc3f49a09bc7b5770f1e7aa212a1b6f8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25174)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #17513: [FLINK-24579][table-api][sql-client] Support toString for arrays as c…

2021-10-18 Thread GitBox


flinkbot commented on pull request #17513:
URL: https://github.com/apache/flink/pull/17513#issuecomment-946023939


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 7ed6fd32b50df2743eb82c9fcc7a0edefc8c11ee (Mon Oct 18 
18:00:50 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-24579).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17513: [FLINK-24579][table-api][sql-client] Support toString for arrays as c…

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17513:
URL: https://github.com/apache/flink/pull/17513#issuecomment-946022176


   
   ## CI report:
   
   * 7ed6fd32b50df2743eb82c9fcc7a0edefc8c11ee Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25190)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #17513: [FLINK-24579][table-api][sql-client] Support toString for arrays as c…

2021-10-18 Thread GitBox


flinkbot commented on pull request #17513:
URL: https://github.com/apache/flink/pull/17513#issuecomment-946022176


   
   ## CI report:
   
   * 7ed6fd32b50df2743eb82c9fcc7a0edefc8c11ee UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24579) FlinkSQL does not print correctly content of arrays nested into maps

2021-10-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-24579:
---
Labels: pull-request-available  (was: )

> FlinkSQL does not print correctly content of arrays nested into maps
> 
>
> Key: FLINK-24579
> URL: https://issues.apache.org/jira/browse/FLINK-24579
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.14.0
>Reporter: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>
> Some samples of queries to reproduce
> {code:sql}
> select map[array[2], 1];
> select map[1, array[2]];
> select map[array['q'], array[2]];
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] snuyanzin opened a new pull request #17513: [FLINK-24579][table-api][sql-client] Support toString for arrays as c…

2021-10-18 Thread GitBox


snuyanzin opened a new pull request #17513:
URL: https://github.com/apache/flink/pull/17513


   ## What is the purpose of the change
   
   This PR enables pretty printing of arrays nested into maps.
   
   
   ## Brief change log
   
 - *PrintUtils#deepToString* supporting maps
 - *Corrections of misprints found along the way*
   
   
   ## Verifying this change
   
   This change adds tests and can be verified as follows:
   *PrintUtilsTest#testNestedMapAndArraysToString* has been added; it 
highlights the issue and can be used to verify the fix.
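   For illustration only, a minimal sketch of the kind of recursive rendering the 
change log describes; the method name `deepToString` comes from the change log, but 
the structure, escaping, and null handling here are assumptions rather than the 
actual PrintUtils code:
   
   ```java
   import java.util.Arrays;
   import java.util.Map;
   import java.util.stream.Collectors;
   
   /** Minimal illustration of deep-printing arrays nested inside maps (hypothetical helper). */
   public final class DeepToStringExample {
   
       // Recursively renders arrays and maps; plain objects fall back to toString().
       static String deepToString(Object value) {
           if (value == null) {
               return "NULL";
           }
           if (value instanceof Object[]) {
               return Arrays.stream((Object[]) value)
                       .map(DeepToStringExample::deepToString)
                       .collect(Collectors.joining(", ", "[", "]"));
           }
           if (value instanceof Map) {
               return ((Map<?, ?>) value).entrySet().stream()
                       .map(e -> deepToString(e.getKey()) + "=" + deepToString(e.getValue()))
                       .collect(Collectors.joining(", ", "{", "}"));
           }
           return value.toString();
       }
   
       public static void main(String[] args) {
           // e.g. the result of SELECT map[array[2], 1] should show the array contents,
           // roughly {[2]=1}, instead of something like [Ljava.lang.Integer;@1b2c3d4.
           System.out.println(deepToString(Map.of(new Object[] {2}, 1)));
       }
   }
   ```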
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
**don't know**)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17179: [FLINK-24086][checkpoint] Rebuilding the SharedStateRegistry only when the restore method is called for the first time.

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17179:
URL: https://github.com/apache/flink/pull/17179#issuecomment-914169535


   
   ## CI report:
   
   * 04ae4ec0488291130ddd425c7c71080af698d58a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25173)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-ml] gaoyunhaii commented on a change in pull request #11: [FLINK-5][iteration] Add DraftExecutionEnvironment to support wrapping operators during compile time

2021-10-18 Thread GitBox


gaoyunhaii commented on a change in pull request #11:
URL: https://github.com/apache/flink-ml/pull/11#discussion_r731119681



##
File path: 
flink-ml-iteration/src/main/java/org/apache/flink/ml/iteration/compile/translator/BroadcastStateTransformationTranslator.java
##
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.iteration.compile.translator;
+
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.dag.Transformation;
+import org.apache.flink.ml.iteration.compile.DraftTransformationTranslator;
+import org.apache.flink.ml.iteration.operator.OperatorWrapper;
+import org.apache.flink.ml.iteration.operator.WrapperOperatorFactory;
+import org.apache.flink.streaming.api.operators.SimpleOperatorFactory;
+import 
org.apache.flink.streaming.api.operators.co.CoBroadcastWithNonKeyedOperator;
+import 
org.apache.flink.streaming.api.transformations.BroadcastStateTransformation;
+import org.apache.flink.streaming.api.transformations.TwoInputTransformation;
+
+/** Draft translator for the {@link BroadcastStateTransformation}. */
+public class BroadcastStateTransformationTranslator

Review comment:
   I'm a bit hesitant here since the draft transformation translator is not 
bound to the iteration. For example, it is also used in the implementation of 
`withBroadcast`.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17439: [FLINK-23271][table-planner] Disallow cast from decimal numerics to boolean

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17439:
URL: https://github.com/apache/flink/pull/17439#issuecomment-938696330


   
   ## CI report:
   
   * 5dc78cd2cb8ec6dc971f05c75cbc9b33001e1fe5 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25171)
 
   * 57a13401fc3f49a09bc7b5770f1e7aa212a1b6f8 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25174)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17501: [Draft][FLINK-21406][RecordFormat] build AvroParquetRecordFormat for the new FileSource

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17501:
URL: https://github.com/apache/flink/pull/17501#issuecomment-944297212


   
   ## CI report:
   
   * 92b897b4f14d61bf0414fe0d0f95771bdf5f05b5 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25170)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] twalthr commented on a change in pull request #17481: [FLINK-24387][table-planner] Support JSON_STRING.

2021-10-18 Thread GitBox


twalthr commented on a change in pull request #17481:
URL: https://github.com/apache/flink/pull/17481#discussion_r731112769



##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/expressions/converter/DirectConvertRule.java
##
@@ -210,6 +210,10 @@
 DEFINITION_OPERATOR_MAP.put(
 BuiltInFunctionDefinitions.STREAM_RECORD_TIMESTAMP,
 FlinkSqlOperatorTable.STREAMRECORD_TIMESTAMP);
+
+// JSON
+DEFINITION_OPERATOR_MAP.put(

Review comment:
   In theory, we could use a bridging function instead (for the planning) but 
with code gen for the runtime implementation. Is there a reason why we should have 
a Calcite function here?

##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/sql/FlinkSqlOperatorTable.java
##
@@ -1145,6 +1145,16 @@ public boolean isDeterministic() {
 public static final SqlFunction JSON_EXISTS = 
SqlStdOperatorTable.JSON_EXISTS;
 public static final SqlFunction JSON_VALUE = 
SqlStdOperatorTable.JSON_VALUE;
 public static final SqlFunction JSON_QUERY = 
SqlStdOperatorTable.JSON_QUERY;
+public static final SqlFunction JSON_STRING =
+new SqlFunction(
+"JSON_STRING",
+SqlKind.OTHER_FUNCTION,
+ReturnTypes.cascade(
+ReturnTypes.explicit(SqlTypeName.VARCHAR),

Review comment:
   Side comment: let's override the `VARCHAR(2000)` for the other JSON 
functions, as it will cause issues in the future otherwise. We are discussing 
making it stricter in the future; see FLINK-24413.

##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/sql/FlinkSqlOperatorTable.java
##
@@ -1145,6 +1145,16 @@ public boolean isDeterministic() {
 public static final SqlFunction JSON_EXISTS = 
SqlStdOperatorTable.JSON_EXISTS;
 public static final SqlFunction JSON_VALUE = 
SqlStdOperatorTable.JSON_VALUE;
 public static final SqlFunction JSON_QUERY = 
SqlStdOperatorTable.JSON_QUERY;
+public static final SqlFunction JSON_STRING =
+new SqlFunction(
+"JSON_STRING",
+SqlKind.OTHER_FUNCTION,
+ReturnTypes.cascade(
+ReturnTypes.explicit(SqlTypeName.VARCHAR),
+FlinkReturnTypes.TO_NULLABLE_SHALLOW),

Review comment:
   This is known behavior. The inner nullability depends on the outer 
nullability.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17440: [FLINK-24468][runtime] Wait for the channel activation before creating partition request client

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17440:
URL: https://github.com/apache/flink/pull/17440#issuecomment-938816662


   
   ## CI report:
   
   * 7d56eb7d002b7bf5c268b98e12a6e4879cd68216 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25042)
 
   * 16b451eba79ef7015e19adf6f0eb7c8b39c70715 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25189)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-ml] gaoyunhaii commented on a change in pull request #8: [FLINK-4][iteration] Add operator wrapper for all-round iterations.

2021-10-18 Thread GitBox


gaoyunhaii commented on a change in pull request #8:
URL: https://github.com/apache/flink-ml/pull/8#discussion_r731110199



##
File path: 
flink-ml-iteration/src/main/java/org/apache/flink/ml/iteration/operator/AbstractWrapperOperator.java
##
@@ -0,0 +1,166 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.iteration.operator;
+
+import org.apache.flink.ml.iteration.IterationRecord;
+import org.apache.flink.ml.iteration.broadcast.BroadcastOutput;
+import org.apache.flink.ml.iteration.broadcast.BroadcastOutputFactory;
+import org.apache.flink.ml.iteration.progresstrack.ProgressTracker;
+import org.apache.flink.ml.iteration.progresstrack.ProgressTrackerFactory;
+import org.apache.flink.ml.iteration.progresstrack.ProgressTrackerListener;
+import org.apache.flink.ml.iteration.proxy.ProxyOutput;
+import org.apache.flink.runtime.execution.Environment;
+import org.apache.flink.runtime.metrics.groups.OperatorMetricGroup;
+import org.apache.flink.runtime.metrics.groups.UnregisteredMetricGroups;
+import org.apache.flink.streaming.api.graph.StreamConfig;
+import org.apache.flink.streaming.api.operators.BoundedOneInput;
+import org.apache.flink.streaming.api.operators.Output;
+import org.apache.flink.streaming.api.operators.StreamOperator;
+import org.apache.flink.streaming.api.operators.StreamOperatorFactory;
+import org.apache.flink.streaming.api.operators.StreamOperatorParameters;
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+import org.apache.flink.streaming.runtime.tasks.StreamTask;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Objects;
+import java.util.function.Supplier;
+
+import static org.apache.flink.util.Preconditions.checkState;
+
+/** The base class of all the wrapper operators. It provides the alignment 
functionality. */
+public abstract class AbstractWrapperOperator
+implements StreamOperator>, 
ProgressTrackerListener, BoundedOneInput {
+
+private static final Logger LOG = 
LoggerFactory.getLogger(AbstractWrapperOperator.class);
+
+protected final StreamOperatorParameters> parameters;
+
+protected final StreamConfig streamConfig;
+
+protected final StreamTask containingTask;
+
+protected final Output>> output;
+
+protected final StreamOperatorFactory operatorFactory;
+
+// --- proxy ---
+
+protected final ProxyOutput proxyOutput;
+
+protected final EpochWatermarkSupplier epochWatermarkSupplier;
+
+// --- Metrics ---
+
+/** Metric group for the operator. */
+protected final OperatorMetricGroup metrics;
+
+// - Iteration Related 
+
+protected final ProgressTracker progressTracker;
+
+protected final BroadcastOutput> eventBroadcastOutput;
+
+public AbstractWrapperOperator(
+StreamOperatorParameters> parameters,
+StreamOperatorFactory operatorFactory) {
+this.parameters = Objects.requireNonNull(parameters);
+this.streamConfig = 
Objects.requireNonNull(parameters.getStreamConfig());
+this.containingTask = 
Objects.requireNonNull(parameters.getContainingTask());
+this.output = Objects.requireNonNull(parameters.getOutput());
+this.operatorFactory = Objects.requireNonNull(operatorFactory);
+
+this.proxyOutput = new ProxyOutput<>(output);
+this.epochWatermarkSupplier = new EpochWatermarkSupplier();
+
+this.metrics = 
createOperatorMetricGroup(containingTask.getEnvironment(), streamConfig);
+
+this.progressTracker = ProgressTrackerFactory.create(streamConfig, 
containingTask, this);
+this.eventBroadcastOutput =
+BroadcastOutputFactory.createBroadcastOutput(
+output, 
metrics.getIOMetricGroup().getNumRecordsOutCounter());

Review comment:
   I hold a slightly different view here, since from the Web UI we could not hide the 
physical implementation; users could still see head operators, etc. Perhaps we 
should keep to the same principle that either we hide all of the physical 
implementation or we do 

[GitHub] [flink] flinkbot edited a comment on pull request #17440: [FLINK-24468][runtime] Wait for the channel activation before creating partition request client

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17440:
URL: https://github.com/apache/flink/pull/17440#issuecomment-938816662


   
   ## CI report:
   
   * 7d56eb7d002b7bf5c268b98e12a6e4879cd68216 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25042)
 
   * 16b451eba79ef7015e19adf6f0eb7c8b39c70715 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17441: [FLINK-24461][table] PrintUtils common now accepts only internal data types

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17441:
URL: https://github.com/apache/flink/pull/17441#issuecomment-938905036


   
   ## CI report:
   
   * 5a10760f62b70db58a953e0d8e304bafa56a053e Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24994)
 
   * e78df5a2a0572a2cfe043bf7737f1e52fe9f89af Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25188)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17510: [FLINK-24355][runtime] Expose the flag for enabling checkpoints after tasks finish in the Web UI

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17510:
URL: https://github.com/apache/flink/pull/17510#issuecomment-945683858


   
   ## CI report:
   
   * e5f8ec6be04661593cb2ea715c92bb0e8f75ca69 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25160)
 
   * d2d050bbae90c773386dcfaf77d3b794f8398599 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25187)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17474: [FLINK-18312] Add caching for savepoint operations to Dispatcher

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17474:
URL: https://github.com/apache/flink/pull/17474#issuecomment-943148500


   
   ## CI report:
   
   * 279ce6cd083abc97b3be004ec78909783a700664 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25186)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17441: [FLINK-24461][table] PrintUtils common now accepts only internal data types

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17441:
URL: https://github.com/apache/flink/pull/17441#issuecomment-938905036


   
   ## CI report:
   
   * 5a10760f62b70db58a953e0d8e304bafa56a053e Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24994)
 
   * e78df5a2a0572a2cfe043bf7737f1e52fe9f89af UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17510: [FLINK-24355][runtime] Expose the flag for enabling checkpoints after tasks finish in the Web UI

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17510:
URL: https://github.com/apache/flink/pull/17510#issuecomment-945683858


   
   ## CI report:
   
   * e5f8ec6be04661593cb2ea715c92bb0e8f75ca69 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25160)
 
   * d2d050bbae90c773386dcfaf77d3b794f8398599 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-24579) FlinkSQL does not print correctly content of arrays nested into maps

2021-10-18 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-24579:
---

 Summary: FlinkSQL does not print correctly content of arrays 
nested into maps
 Key: FLINK-24579
 URL: https://issues.apache.org/jira/browse/FLINK-24579
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.14.0
Reporter: Sergey Nuyanzin


Some samples of queries to reproduce
{code:sql}
select map[array[2], 1];
select map[1, array[2]];
select map[array['q'], array[2]];
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24578) Unexpected erratic load shape for channel skew load profile

2021-10-18 Thread Anton Kalashnikov (Jira)
Anton Kalashnikov created FLINK-24578:
-

 Summary: Unexpected erratic load shape for channel skew load 
profile
 Key: FLINK-24578
 URL: https://issues.apache.org/jira/browse/FLINK-24578
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Affects Versions: 1.14.0
Reporter: Anton Kalashnikov
 Attachments: antiphaseBufferSize.png, erraticBufferSize1.png, 
erraticBufferSize2.png

given:

The job has 5 maps (with keyBy).

All channels are remote. Parallelism is 80.

The first task produces only two keys - `indexOfThisSubtask` and 
`indexOfThisSubtask + 1`. So every subtask has a constant number of active 
channels (depending on the hash rebalance).

Every record has an equal size and is processed for an equal time.

 

when: 

The buffer debloat is enabled with the default configuration.

 

then:

The buffer size synchronizes across every subtask of the first map for some reason. 
It can show strong synchronization, as in the erraticBufferSize1 picture, but 
usually the synchronization is less explicit, as in erraticBufferSize2.

!erraticBufferSize1.png!

 

Expected:

After the stabilization period, the buffer size should be mostly constant with 
small fluctuations, or the different tasks should be in antiphase to each 
other (when one subtask has a small buffer size, another should have a big 
buffer size); see for example the antiphaseBufferSize picture.

!antiphaseBufferSize.png!

 

Unfortunately, it is not reproduced every time, which means that this problem 
may be connected to the environment. But at least it makes sense to try to 
understand why we have such a strange load shape when only several input channels 
are active.
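
For reference, a minimal sketch (DataStream API, Java) of the kind of job described 
above. The class name, record type, and source loop are assumptions for illustration 
only, not the actual benchmark code:

{code:java}
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.DiscardingSink;
import org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;

/** Hypothetical reproduction job: 5 keyBy+map stages, 2 active keys per source subtask. */
public class ChannelSkewJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(80);
        // Buffer debloating is assumed to be enabled in the cluster configuration,
        // e.g. taskmanager.network.memory.buffer-debloat.enabled: true.

        DataStream<Tuple2<Integer, Long>> stream =
                env.addSource(
                        new RichParallelSourceFunction<Tuple2<Integer, Long>>() {
                            private volatile boolean running = true;

                            @Override
                            public void run(SourceContext<Tuple2<Integer, Long>> ctx) {
                                int subtask = getRuntimeContext().getIndexOfThisSubtask();
                                long value = 0;
                                while (running) {
                                    // Only two keys per subtask, so each downstream subtask
                                    // sees a constant, skewed set of active channels.
                                    ctx.collect(Tuple2.of(subtask, value));
                                    ctx.collect(Tuple2.of(subtask + 1, value++));
                                }
                            }

                            @Override
                            public void cancel() {
                                running = false;
                            }
                        });

        for (int i = 0; i < 5; i++) {
            // Five map stages, each preceded by a keyBy, so all exchanges are network shuffles.
            stream =
                    stream.keyBy(t -> t.f0)
                            .map(t -> t)
                            .returns(Types.TUPLE(Types.INT, Types.LONG));
        }

        stream.addSink(new DiscardingSink<>());
        env.execute("channel-skew-load-profile");
    }
}
{code}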

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #17474: [FLINK-18312] Add caching for savepoint operations to Dispatcher

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17474:
URL: https://github.com/apache/flink/pull/17474#issuecomment-943148500


   
   ## CI report:
   
   * 9b955f3b9d31e3cfc411bd763ace3a0fb916b3c4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25178)
 
   * 279ce6cd083abc97b3be004ec78909783a700664 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25186)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17474: [FLINK-18312] Add caching for savepoint operations to Dispatcher

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17474:
URL: https://github.com/apache/flink/pull/17474#issuecomment-943148500


   
   ## CI report:
   
   * 9b955f3b9d31e3cfc411bd763ace3a0fb916b3c4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25178)
 
   * 279ce6cd083abc97b3be004ec78909783a700664 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17459: [FLINK-24397] Remove TableSchema usages from Flink connectors

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17459:
URL: https://github.com/apache/flink/pull/17459#issuecomment-940996979


   
   ## CI report:
   
   * 77238fc01c44ecbd5addf1f0dada9baa9366ffc4 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25167)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] slinkydeveloper commented on a change in pull request #17441: [FLINK-24461][table] PrintUtils common now accepts only internal data types

2021-10-18 Thread GitBox


slinkydeveloper commented on a change in pull request #17441:
URL: https://github.com/apache/flink/pull/17441#discussion_r731092407



##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/CollectDynamicSink.java
##
@@ -115,6 +105,9 @@ public SinkRuntimeProvider getSinkRuntimeProvider(Context 
context) {
 externalSerializer,
 accumulatorName,
 checkpointConfig);
+this.converter = 
context.createDataStructureConverter(consumedDataType);
+this.converter.open(
+
RuntimeConverter.Context.create(config.getClass().getClassLoader()));

Review comment:
   Given this lambda runs in the same classloader as the table environment, 
I think using the current thread classloader just makes sense?
   
   > I would call the open() later.
   
   Do you have any specific location where you would call it?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17474: [FLINK-18312] Add caching for savepoint operations to Dispatcher

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17474:
URL: https://github.com/apache/flink/pull/17474#issuecomment-943148500


   
   ## CI report:
   
   * 9b955f3b9d31e3cfc411bd763ace3a0fb916b3c4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25178)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17474: [FLINK-18312] Add caching for savepoint operations to Dispatcher

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17474:
URL: https://github.com/apache/flink/pull/17474#issuecomment-943148500


   
   ## CI report:
   
   * 9b955f3b9d31e3cfc411bd763ace3a0fb916b3c4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25178)
 
   * 279ce6cd083abc97b3be004ec78909783a700664 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] nicoweidner commented on a change in pull request #17472: [FLINK-24486][rest] Make async result store duration configurable

2021-10-18 Thread GitBox


nicoweidner commented on a change in pull request #17472:
URL: https://github.com/apache/flink/pull/17472#discussion_r731087092



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/rest/handler/async/CompletedOperationCacheTest.java
##
@@ -107,4 +109,53 @@ public void testCanGetOperationResultAfterClosing() throws 
Exception {
 
 assertThat(result.right(), is(equalTo(TEST_OPERATION_RESULT.get(;
 }
+
+@Test
+public void testCacheTimeout() throws Exception {
+final Duration timeout = 
RestOptions.ASYNC_OPERATION_STORE_DURATION.defaultValue();
+
+completedOperationCache = new CompletedOperationCache<>(timeout, 
manualTicker);
+completedOperationCache.registerOngoingOperation(TEST_OPERATION_KEY, 
TEST_OPERATION_RESULT);
+
+// sanity check that the operation can be retrieved before the timeout
+assertThat(
+completedOperationCache.get(TEST_OPERATION_KEY).right(),
+is(equalTo(TEST_OPERATION_RESULT.get(;
+
+manualTicker.advanceTime(timeout.multipliedBy(2).getSeconds(), 
TimeUnit.SECONDS);
+
+try {
+completedOperationCache.get(TEST_OPERATION_KEY);
+fail("Timeout should have removed cache entry.");
+} catch (UnknownOperationKeyException expected) {
+}
+}
+
+@Test
+public void testCacheTimeoutCanBeDisabled() throws Exception {
+completedOperationCache =
+new CompletedOperationCache<>(Duration.ofSeconds(0), 
manualTicker);
+completedOperationCache.registerOngoingOperation(TEST_OPERATION_KEY, 
TEST_OPERATION_RESULT);
+
+manualTicker.advanceTime(365, TimeUnit.DAYS);
+
+assertThat(
+completedOperationCache.get(TEST_OPERATION_KEY).right(),
+is(equalTo(TEST_OPERATION_RESULT.get(;
+}
+
+@Test
+public void testCacheTimeoutCanBeConfigured() throws Exception {
+final Duration baseTImeout = 
RestOptions.ASYNC_OPERATION_STORE_DURATION.defaultValue();

Review comment:
   ```suggestion
   final Duration baseTimeout = RestOptions.ASYNC_OPERATION_STORE_DURATION.defaultValue();
   ```




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] slinkydeveloper commented on a change in pull request #17441: [FLINK-24461][table] PrintUtils common now accepts only internal data types

2021-10-18 Thread GitBox


slinkydeveloper commented on a change in pull request #17441:
URL: https://github.com/apache/flink/pull/17441#discussion_r731087036



##
File path: 
flink-table/flink-table-common/src/test/java/org/apache/flink/table/utils/PrintUtilsTest.java
##
@@ -437,10 +463,28 @@ private ResolvedSchema getSchema() {
 RowKind.DELETE,
 null,
 -1,
--1,
+-1L,
 "これは日本語をテストするための文です",
 BigDecimal.valueOf(-12345.06789),
 Timestamp.valueOf("2020-03-04 18:39:14")));
-return data;
+return data.stream()

Review comment:
   This requires importing table-runtime as test dependency of 
table-common, I think we should keep as it is.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17474: [FLINK-18312] Add caching for savepoint operations to Dispatcher

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17474:
URL: https://github.com/apache/flink/pull/17474#issuecomment-943148500


   
   ## CI report:
   
   * 9b955f3b9d31e3cfc411bd763ace3a0fb916b3c4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25178)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-24480) EqualiserCodeGeneratorTest fails on azure

2021-10-18 Thread Timo Walther (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther closed FLINK-24480.

Fix Version/s: 1.14.1
   1.15.0
   Resolution: Fixed

Fixed in master: 639824faccf8678a441d882e501e5044ff37c01b
Fixed in 1.14: 3f78a081aa563a8d935cc94a8d239b2208636271
Fixed in 1.13: a21ae65d2fb5eb648dc78716f88bd30fc54247ef
Fixed in 1.12: ef9f717b02d9b1abc4951248e4ef4d042a8ccb4b

> EqualiserCodeGeneratorTest fails on azure
> -
>
> Key: FLINK-24480
> URL: https://issues.apache.org/jira/browse/FLINK-24480
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.12.5, 1.13.2
>Reporter: Xintong Song
>Assignee: Ingo Bürk
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.6, 1.15.0, 1.14.1, 1.13.4
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24809=logs=955770d3-1fed-5a0a-3db6-0c7554c910cb=14447d61-56b4-5000-80c1-daa459247f6a=42615
> {code}
> Oct 07 01:11:46 [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 8.236 s <<< FAILURE! - in 
> org.apache.flink.table.planner.codegen.EqualiserCodeGeneratorTest
> Oct 07 01:11:46 [ERROR] 
> testManyFields(org.apache.flink.table.planner.codegen.EqualiserCodeGeneratorTest)
>   Time elapsed: 8.21 s  <<< FAILURE!
> Oct 07 01:11:46 java.lang.AssertionError: Expected compilation to succeed
> Oct 07 01:11:46   at org.junit.Assert.fail(Assert.java:88)
> Oct 07 01:11:46   at 
> org.apache.flink.table.planner.codegen.EqualiserCodeGeneratorTest.testManyFields(EqualiserCodeGeneratorTest.java:102)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #17493: [FLINK-24019][build][dist] Package Scala APIs separately

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17493:
URL: https://github.com/apache/flink/pull/17493#issuecomment-944057092


   
   ## CI report:
   
   * 138a13cae9d3fe5b8003f0a50e70cfcb4ee03d6c Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25168)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17474: [FLINK-18312] Add caching for savepoint operations to Dispatcher

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17474:
URL: https://github.com/apache/flink/pull/17474#issuecomment-943148500


   
   ## CI report:
   
   * 9b955f3b9d31e3cfc411bd763ace3a0fb916b3c4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25178)
 
   * 279ce6cd083abc97b3be004ec78909783a700664 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] twalthr closed pull request #17499: [FLINK-24480][table-planner] Reduce number of fields in test

2021-10-18 Thread GitBox


twalthr closed pull request #17499:
URL: https://github.com/apache/flink/pull/17499


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17474: [FLINK-18312] Add caching for savepoint operations to Dispatcher

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17474:
URL: https://github.com/apache/flink/pull/17474#issuecomment-943148500


   
   ## CI report:
   
   * 9b955f3b9d31e3cfc411bd763ace3a0fb916b3c4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25178)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17474: [FLINK-18312] Add caching for savepoint operations to Dispatcher

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17474:
URL: https://github.com/apache/flink/pull/17474#issuecomment-943148500


   
   ## CI report:
   
   * 9b955f3b9d31e3cfc411bd763ace3a0fb916b3c4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25178)
 
   * 279ce6cd083abc97b3be004ec78909783a700664 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] twalthr commented on a change in pull request #17441: [FLINK-24461][table] PrintUtils common now accepts only internal data types

2021-10-18 Thread GitBox


twalthr commented on a change in pull request #17441:
URL: https://github.com/apache/flink/pull/17441#discussion_r731066521



##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/CollectDynamicSink.java
##
@@ -115,6 +105,9 @@ public SinkRuntimeProvider getSinkRuntimeProvider(Context 
context) {
 externalSerializer,
 accumulatorName,
 checkpointConfig);
+this.converter = 
context.createDataStructureConverter(consumedDataType);
+this.converter.open(
+
RuntimeConverter.Context.create(config.getClass().getClassLoader()));

Review comment:
   Ideally, the classloader of the session (table environment) is used 
here. Currently, we don't have a central way of declaring the classloader in 
`EnvironmentSettings` when initializing the environment, but this might happen 
soon. For now, we should use `Thread.currentThread().getContextClassLoader()`. 
But shouldn't we already prepare the stack to pass the classloader from the table 
environment to this location? If this is too complicated here, I would call 
`open()` later.
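   
   As a rough sketch of that interim suggestion (against the diff above, not a final 
implementation):
   
   ```java
   // Interim variant of the lines from the diff: use the context classloader until the
   // session classloader can be passed down from the TableEnvironment.
   this.converter = context.createDataStructureConverter(consumedDataType);
   this.converter.open(
           RuntimeConverter.Context.create(Thread.currentThread().getContextClassLoader()));
   ```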

##
File path: 
flink-table/flink-table-common/src/test/java/org/apache/flink/table/utils/PrintUtilsTest.java
##
@@ -437,10 +463,28 @@ private ResolvedSchema getSchema() {
 RowKind.DELETE,
 null,
 -1,
--1,
+-1L,
 "これは日本語をテストするための文です",
 BigDecimal.valueOf(-12345.06789),
 Timestamp.valueOf("2020-03-04 18:39:14")));
-return data;
+return data.stream()

Review comment:
   Use `DataStructureConverter` here as well, similar to the other test once 
the feedback there is addressed.

##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/CollectDynamicSink.java
##
@@ -124,17 +117,86 @@ public SinkRuntimeProvider getSinkRuntimeProvider(Context 
context) {
 
 @Override
 public DynamicTableSink copy() {
-final CollectDynamicSink copy =
-new CollectDynamicSink(
-tableIdentifier, consumedDataType, maxBatchSize, 
socketTimeout);
-// kind of violates the contract of copy() but should not harm
-// as it is null during optimization anyway until physical translation
-copy.iterator = iterator;
-return copy;
+return new CollectDynamicSink(
+tableIdentifier, consumedDataType, maxBatchSize, 
socketTimeout);
 }
 
 @Override
 public String asSummaryString() {
 return String.format("TableToCollect(type=%s)", consumedDataType);
 }
+
+private final class CollectResultProvider implements ResultProvider {
+
+private CloseableRowIteratorWrapper rowDataIterator;
+private CloseableRowIteratorWrapper rowIterator;
+
+private void initialize() {
+if (this.rowIterator == null) {
+this.rowDataIterator =
+new CloseableRowIteratorWrapper<>(iterator, 
Function.identity());
+this.rowIterator =
+new CloseableRowIteratorWrapper<>(
+iterator, r -> (Row) converter.toExternal(r));
+}
+}
+
+@Override
+public ResultProvider setJobClient(JobClient jobClient) {
+iterator.setJobClient(jobClient);
+return this;
+}
+
+@Override
+public CloseableIterator toInternalIterator() {
+initialize();

Review comment:
   How about we introduce dedicated `initializeRowDataIterator()` and 
`initializeRowIterator()` methods, instead of instantiating both iterators when one is unused?

##
File path: 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/gateway/local/result/BaseMaterializedResultTest.java
##
@@ -0,0 +1,56 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package 

[GitHub] [flink] flinkbot edited a comment on pull request #17474: [FLINK-18312] Add caching for savepoint operations to Dispatcher

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17474:
URL: https://github.com/apache/flink/pull/17474#issuecomment-943148500


   
   ## CI report:
   
   * 9b955f3b9d31e3cfc411bd763ace3a0fb916b3c4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25178)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17474: [FLINK-18312] Add caching for savepoint operations to Dispatcher

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17474:
URL: https://github.com/apache/flink/pull/17474#issuecomment-943148500


   
   ## CI report:
   
   * 9b955f3b9d31e3cfc411bd763ace3a0fb916b3c4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25178)
 
   * 279ce6cd083abc97b3be004ec78909783a700664 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17511: [FLINK-24122][history] Add support to do clean in history server

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17511:
URL: https://github.com/apache/flink/pull/17511#issuecomment-945683958


   
   ## CI report:
   
   * e1142a17f72fc01067ec2582c71b2887795e82bb Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25161)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-21884) Reduce TaskManager failure detection time

2021-10-18 Thread ZhuoYu Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17430068#comment-17430068
 ] 

ZhuoYu Chen commented on FLINK-21884:
-

Hi [~rmetzger], I am very interested in this and would like to contribute to 
Flink. Could I help with this issue?
 Thank you

> Reduce TaskManager failure detection time
> -
>
> Key: FLINK-21884
> URL: https://issues.apache.org/jira/browse/FLINK-21884
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0, 1.13.2
>Reporter: Robert Metzger
>Priority: Critical
>  Labels: reactive
> Fix For: 1.15.0
>
> Attachments: image-2021-03-19-20-10-40-324.png
>
>
> In Flink 1.13 (and older versions), TaskManager failures stall the processing 
> for a significant amount of time, even though the system gets indications for 
> the failure almost immediately through network connection losses.
> This is due to a high (default) heartbeat timeout of 50 seconds [1] to 
> accommodate for GC pauses, transient network disruptions or generally slow 
> environments (otherwise, we would unregister a healthy TaskManager).
> Such a high timeout can lead to disruptions in the processing (no processing 
> for certain periods, high latencies, buildup of consumer lag etc.). In 
> Reactive Mode (FLINK-10407), the issue surfaces on scale-down events, where 
> the loss of a TaskManager is immediately visible in the logs, but the job is 
> stuck in "FAILING" for quite a while until the TaskManager is really 
> deregistered. (Note that this issue is not that critical in an autoscaling 
> setup, because Flink can control the scale-down events and trigger them 
> proactively)
> On the attached metrics dashboard, one can see that the job has significant 
> throughput drops / consumer lags during scale down (and also CPU usage spikes 
> on processing the queued events, leading to incorrect scale up events again).
>  !image-2021-03-19-20-10-40-324.png|thumbnail!
> One idea to solve this problem is to:
> - Score TaskManagers based on certain signals (# exceptions reported, 
> exception types (connection losses, akka failures), failure frequencies,  
> ...) and blacklist them accordingly.
> - Introduce a best-effort TaskManager unregistration mechanism: When a 
> TaskManager receives a sigterm, it sends a final message to the JobManager 
> saying "goodbye", and the JobManager can immediately remove the TM from its 
> bookkeeping.
> [1] 
> https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/config/#heartbeat-timeout
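
For reference, a minimal sketch of lowering the heartbeat settings from [1] 
programmatically (illustrative values, not a recommendation; the same keys can 
also be set in flink-conf.yaml):

{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.HeartbeatManagerOptions;

public class HeartbeatConfigSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // heartbeat.timeout defaults to 50000 ms; lowering it detects lost TaskManagers
        // faster, at the risk of unregistering healthy ones during long GC pauses.
        conf.set(HeartbeatManagerOptions.HEARTBEAT_TIMEOUT, 10_000L);
        // heartbeat.interval defaults to 10000 ms and must stay below the timeout.
        conf.set(HeartbeatManagerOptions.HEARTBEAT_INTERVAL, 2_000L);
        System.out.println(conf);
    }
}
{code}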



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-ml] gaoyunhaii commented on a change in pull request #8: [FLINK-4][iteration] Add operator wrapper for all-round iterations.

2021-10-18 Thread GitBox


gaoyunhaii commented on a change in pull request #8:
URL: https://github.com/apache/flink-ml/pull/8#discussion_r731047886



##
File path: 
flink-ml-iteration/src/main/java/org/apache/flink/ml/iteration/operator/AbstractWrapperOperator.java
##
@@ -0,0 +1,166 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.iteration.operator;
+
+import org.apache.flink.ml.iteration.IterationRecord;
+import org.apache.flink.ml.iteration.broadcast.BroadcastOutput;
+import org.apache.flink.ml.iteration.broadcast.BroadcastOutputFactory;
+import org.apache.flink.ml.iteration.progresstrack.ProgressTracker;
+import org.apache.flink.ml.iteration.progresstrack.ProgressTrackerFactory;
+import org.apache.flink.ml.iteration.progresstrack.ProgressTrackerListener;
+import org.apache.flink.ml.iteration.proxy.ProxyOutput;
+import org.apache.flink.runtime.execution.Environment;
+import org.apache.flink.runtime.metrics.groups.OperatorMetricGroup;
+import org.apache.flink.runtime.metrics.groups.UnregisteredMetricGroups;
+import org.apache.flink.streaming.api.graph.StreamConfig;
+import org.apache.flink.streaming.api.operators.BoundedOneInput;
+import org.apache.flink.streaming.api.operators.Output;
+import org.apache.flink.streaming.api.operators.StreamOperator;
+import org.apache.flink.streaming.api.operators.StreamOperatorFactory;
+import org.apache.flink.streaming.api.operators.StreamOperatorParameters;
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+import org.apache.flink.streaming.runtime.tasks.StreamTask;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Objects;
+import java.util.function.Supplier;
+
+import static org.apache.flink.util.Preconditions.checkState;
+
+/** The base class of all the wrapper operators. It provides the alignment 
functionality. */
+public abstract class AbstractWrapperOperator<T>
+implements StreamOperator<IterationRecord<T>>, 
ProgressTrackerListener, BoundedOneInput {

Review comment:
   Yes, we should leave `BoundedOneInput` to 
`OneInputAllRoundWrapperOperator` and `BoundedMultiInput` to the other 
two. 
   
   Flink currently does not support operators that implement both 
`BoundedOneInput` and `BoundedMultiInput`; if an operator implements both, only 
one of the interfaces takes effect. 
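
   A minimal sketch of that split (hypothetical class names, not the actual 
flink-ml wrappers): the one-input wrapper implements `BoundedOneInput`, the 
multi-input wrappers implement `BoundedMultiInput`, and no wrapper implements both.

```java
import org.apache.flink.streaming.api.operators.BoundedMultiInput;
import org.apache.flink.streaming.api.operators.BoundedOneInput;

abstract class SketchWrapperOperator {
    /** Shared alignment logic; forwards "input i has finished" to the wrapped operator. */
    protected void notifyEndOfInput(int inputIndex) throws Exception {
        // epoch watermark / progress tracking would be updated here
    }
}

class SketchOneInputWrapper extends SketchWrapperOperator implements BoundedOneInput {
    @Override
    public void endInput() throws Exception {
        notifyEndOfInput(0); // the single logical input
    }
}

class SketchMultiInputWrapper extends SketchWrapperOperator implements BoundedMultiInput {
    @Override
    public void endInput(int inputId) throws Exception {
        notifyEndOfInput(inputId - 1); // Flink numbers inputs from 1
    }
}
```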




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17510: [FLINK-24355][runtime] Expose the flag for enabling checkpoints after tasks finish in the Web UI

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17510:
URL: https://github.com/apache/flink/pull/17510#issuecomment-945683858


   
   ## CI report:
   
   * e5f8ec6be04661593cb2ea715c92bb0e8f75ca69 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25160)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16606: [FLINK-21357][runtime/statebackend]Periodic materialization for generalized incremental checkpoints

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #16606:
URL: https://github.com/apache/flink/pull/16606#issuecomment-887431748


   
   ## CI report:
   
   * e55e39b5d853dca996b59212c2c8d9899a9b5a81 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25162)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] twalthr closed pull request #17343: [hotfix][docs] Various doc enhancements around filesystem table connector and json

2021-10-18 Thread GitBox


twalthr closed pull request #17343:
URL: https://github.com/apache/flink/pull/17343


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-ml] gaoyunhaii commented on a change in pull request #8: [FLINK-4][iteration] Add operator wrapper for all-round iterations.

2021-10-18 Thread GitBox


gaoyunhaii commented on a change in pull request #8:
URL: https://github.com/apache/flink-ml/pull/8#discussion_r731033300



##
File path: 
flink-ml-iteration/src/main/java/org/apache/flink/ml/iteration/proxy/ProxyOutput.java
##
@@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.iteration.proxy;
+
+import org.apache.flink.ml.iteration.IterationRecord;
+import org.apache.flink.ml.iteration.typeinfo.IterationRecordTypeInfo;
+import org.apache.flink.streaming.api.operators.Output;
+import org.apache.flink.streaming.api.watermark.Watermark;
+import org.apache.flink.streaming.runtime.streamrecord.LatencyMarker;
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+import org.apache.flink.util.OutputTag;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Objects;
+
+/** Proxy output to provide to the wrapped operator. */
+public class ProxyOutput<T> implements Output<StreamRecord<T>> {
+
+private final Output<StreamRecord<IterationRecord<T>>> output;
+
+private final StreamRecord<IterationRecord<T>> reuseRecord;
+
+private final Map<String, SideOutputCache> sideOutputCaches = new 
HashMap<>();
+
+private Integer contextRound;
+
+public ProxyOutput(Output>> output) {
+this.output = Objects.requireNonNull(output);
+this.reuseRecord = new StreamRecord<>(IterationRecord.newRecord(null, 
0));
+}
+
+public void setContextRound(Integer contextRound) {
+this.contextRound = contextRound;
+}
+
+@Override
+public void emitWatermark(Watermark mark) {
+output.emitWatermark(mark);
+}
+
+@Override
+@SuppressWarnings({"unchecked", "rawtypes"})
+public  void collect(OutputTag outputTag, StreamRecord record) {
+SideOutputCache sideOutputCache =
+sideOutputCaches.computeIfAbsent(
+outputTag.getId(),
+(ignored) ->
+new SideOutputCache(
+new OutputTag>(
+outputTag.getId(),
+new IterationRecordTypeInfo(
+
outputTag.getTypeInfo())),
+new 
StreamRecord<>(IterationRecord.newRecord(null, 0;
+sideOutputCache.cachedRecord.replace(
+IterationRecord.newRecord(record.getValue(), contextRound), 
record.getTimestamp());
+output.collect(sideOutputCache.tag, sideOutputCache.cachedRecord);
+}
+
+@Override
+public void emitLatencyMarker(LatencyMarker latencyMarker) {
+output.emitLatencyMarker(latencyMarker);
+}
+
+@Override
+public void collect(StreamRecord<T> tStreamRecord) {
+reuseRecord.getValue().setValue(tStreamRecord.getValue());
+reuseRecord.getValue().setRound(contextRound);

Review comment:
   There might be asynchronous algorithms where multiple rounds are executed in 
parallel; in that case, records from different epochs would be mixed. 




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-24539) ChangelogNormalize operator tooks too long time to INITIALIZING until failed

2021-10-18 Thread Timo Walther (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17430063#comment-17430063
 ] 

Timo Walther edited comment on FLINK-24539 at 10/18/21, 3:12 PM:
-

[~vmaster] {{ChangelogNormalize}} stores the entire input table in state in 
order to normalize the incoming change messages. It is a very expensive operator. 
Are all 500 million rows processed by a single node or how is your data 
partitioned? I see you are only using a parallelism of 2?


was (Author: twalthr):
[~vmaster] {{ChangelogNormalize}} stores the entire input table in state in 
order to normalize the incoming change messages. It is a very expensive operator. 
Are all 500 million rows processed by a single node or how is your data 
partitioned?

> ChangelogNormalize operator tooks too long time to INITIALIZING until failed
> 
>
> Key: FLINK-24539
> URL: https://issues.apache.org/jira/browse/FLINK-24539
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / Task, Table SQL / 
> Runtime
>Affects Versions: 1.13.1
> Environment: Flink version :1.13.1
> TaskManager memory:
> !image-2021-10-14-13-36-56-899.png|width=578,height=318!
> JobManager memory:
> !image-2021-10-14-13-37-51-445.png|width=578,height=229!
>Reporter: vmaster.cc
>Priority: Major
> Attachments: image-2021-10-14-13-19-08-215.png, 
> image-2021-10-14-13-36-56-899.png, image-2021-10-14-13-37-51-445.png, 
> image-2021-10-14-14-13-13-370.png, image-2021-10-14-14-15-40-101.png, 
> image-2021-10-14-14-16-33-080.png, 
> taskmanager_container_e11_1631768043929_0012_01_04_log.txt
>
>
> I'm using Debezium to produce CDC events from MySQL. Since it provides 
> at-least-once delivery, I must set the config 
> 'table.exec.source.cdc-events-duplicate=true'.
> But when some unknown issue brought my task down, the Flink job always failed 
> to restart. I found that the ChangelogNormalize operator took too long in the 
> INITIALIZING stage.
>  
> screenshot and log fragment are as follows:
> !image-2021-10-14-13-19-08-215.png|width=567,height=293!
>  
> {code:java}
> 2021-10-14 12:32:33,660 INFO  
> org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackendBuilder [] - 
> Finished building RocksDB keyed state-backend at 
> /data3/yarn/nm/usercache/flink/appcache/application_1631768043929_0012/flink-io-f31735c3-e726-4c49-89a5-916670809b7a/job_7734977994a6a10f7cc784d50e4a1a34_op_KeyedProcessOperator_dc2290bb6f8f5cd2bd425368843494fe__1_1__uuid_6cbbe6ae-f43e-4d2a-b1fb-f0cb71f257af.2021-10-14
>  12:32:33,662 INFO  org.apache.flink.runtime.taskmanager.Task 
>[] - GroupAggregate(groupBy=[teacher_id, create_day], select=[teacher_id, 
> create_day, SUM_RETRACT($f2) AS teacher_courseware_count]) -> 
> Calc(select=[teacher_id, create_day, CAST(teacher_courseware_count) AS 
> teacher_courseware_count]) -> NotNullEnforcer(fields=[teacher_id, 
> create_day]) (1/1)#143 (9cca3ef1293cc6364698381bbda93998) switched from 
> INITIALIZING to RUNNING.2021-10-14 12:38:07,581 INFO  
> org.apache.flink.runtime.taskmanager.Task[] - Ignoring 
> checkpoint aborted notification for non-running task 
> ChangelogNormalize(key=[c_id]) -> Calc(select=[c_author_id AS teacher_id, 
> DATE_FORMAT(c_create_time, _UTF-16LE'-MM-dd') AS create_day, IF((c_state 
> = 10), 1, 0) AS $f2], where=[((c_is_group = 0) AND (c_author_id <> 
> _UTF-16LE'':VARCHAR(2147483647) CHARACTER SET "UTF-16LE"))]) 
> (1/1)#143.2021-10-14 12:38:07,581 INFO  
> org.apache.flink.runtime.taskmanager.Task[] - Attempting 
> to cancel task Sink: 
> Sink(table=[default_catalog.default_database.t_flink_school_teacher_courseware_count],
>  fields=[teacher_id, create_day, teacher_courseware_count]) (2/2)#143 
> (cc25f9ae49c4db01ab40ff103fae43fd).2021-10-14 12:38:07,581 INFO  
> org.apache.flink.runtime.taskmanager.Task[] - Sink: 
> Sink(table=[default_catalog.default_database.t_flink_school_teacher_courseware_count],
>  fields=[teacher_id, create_day, teacher_courseware_count]) (2/2)#143 
> (cc25f9ae49c4db01ab40ff103fae43fd) switched from RUNNING to 
> CANCELING.2021-10-14 12:38:07,581 INFO  
> org.apache.flink.runtime.taskmanager.Task[] - Triggering 
> cancellation of task code Sink: 
> Sink(table=[default_catalog.default_database.t_flink_school_teacher_courseware_count],
>  fields=[teacher_id, create_day, teacher_courseware_count]) (2/2)#143 
> (cc25f9ae49c4db01ab40ff103fae43fd).2021-10-14 12:38:07,583 INFO  
> org.apache.flink.runtime.taskmanager.Task[] - Attempting 
> to cancel task Sink: 
> Sink(table=[default_catalog.default_database.t_flink_school_teacher_courseware_count],
>  fields=[teacher_id, create_day, 

[GitHub] [flink-ml] gaoyunhaii commented on a change in pull request #8: [FLINK-4][iteration] Add operator wrapper for all-round iterations.

2021-10-18 Thread GitBox


gaoyunhaii commented on a change in pull request #8:
URL: https://github.com/apache/flink-ml/pull/8#discussion_r731032333



##
File path: 
flink-ml-iteration/src/main/java/org/apache/flink/ml/iteration/progresstrack/ProgressTracker.java
##
@@ -0,0 +1,145 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.iteration.progresstrack;
+
+import org.apache.flink.annotation.VisibleForTesting;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * Tracks the epoch watermark from each input. Once the minimum epoch 
watermark changed, it would
+ * notify the listener.
+ */
+public class ProgressTracker {
+
+private final ProgressTrackerListener progressTrackerListener;
+
+private final List<InputStatus> inputStatuses;
+
+private final LowerBoundMaintainer allInputsLowerBound;
+
+public ProgressTracker(
+int[] numberOfChannels, ProgressTrackerListener 
progressTrackerListener) {
+checkState(numberOfChannels != null && numberOfChannels.length >= 1);
+this.progressTrackerListener = checkNotNull(progressTrackerListener);
+
+this.inputStatuses = new ArrayList<>(numberOfChannels.length);
+for (int numberOfChannel : numberOfChannels) {
+inputStatuses.add(new InputStatus(numberOfChannel));
+}
+
+this.allInputsLowerBound = new 
LowerBoundMaintainer(numberOfChannels.length);
+}
+
+public void onEpochWatermark(int inputIndex, String sender, int 
epochWatermark)
+throws IOException {
+InputStatus inputStatus = inputStatuses.get(inputIndex);
+inputStatus.onUpdate(sender, epochWatermark);
+
+if (inputStatus.getInputLowerBound() > 
allInputsLowerBound.getValue(inputIndex)) {
+int oldLowerBound = allInputsLowerBound.getLowerBound();
+allInputsLowerBound.updateValue(inputIndex, 
inputStatus.getInputLowerBound());
+if (allInputsLowerBound.getLowerBound() > oldLowerBound) {
+progressTrackerListener.onEpochWatermarkIncrement(
+allInputsLowerBound.getLowerBound());
+}
+}
+}
+
+@VisibleForTesting
+int[] getNumberOfInputs() {
+return inputStatuses.stream()
+.mapToInt(inputStatus -> inputStatus.numberOfChannels)
+.toArray();
+}
+
+private static class InputStatus {
+private final int numberOfChannels;
+private final Map<String, Integer> senderIndices;
+private final LowerBoundMaintainer allChannelsLowerBound;
+
+public InputStatus(int numberOfChannels) {
+this.numberOfChannels = numberOfChannels;
+this.senderIndices = new HashMap<>(numberOfChannels);
+this.allChannelsLowerBound = new 
LowerBoundMaintainer(numberOfChannels);
+}
+
+public void onUpdate(String sender, int epochWatermark) {
+int index = senderIndices.computeIfAbsent(sender, k -> 
senderIndices.size());
+checkState(index < numberOfChannels);
+
+allChannelsLowerBound.updateValue(index, epochWatermark);
+}
+
+public int getInputLowerBound() {
+return allChannelsLowerBound.getLowerBound();
+}
+}
+
+private static class LowerBoundMaintainer {
+
+private final int[] values;

Review comment:
   I hold a slightly different opinion: it seems unlikely that an 
algorithm would iterate Integer.MAX_VALUE times before convergence. Using `long` 
would also add overhead to record transmission over the network (an extra 4 bytes 
per record if serialized as a fixed-width field). 
   
   We may revisit this decision once we encounter an actual scenario that needs it. 




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24539) ChangelogNormalize operator tooks too long time to INITIALIZING until failed

2021-10-18 Thread Timo Walther (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17430063#comment-17430063
 ] 

Timo Walther commented on FLINK-24539:
--

[~vmaster] {{ChangelogNormalize}} stores the entire input table in state in 
order to normalize the incoming change messages. It is a very expensive operator. 
Are all 500 million rows processed by a single node or how is your data 
partitioned?
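
For context, a minimal sketch (illustrative class name) of setting the option 
mentioned in the issue description; it is this option that asks the planner to 
deduplicate CDC events, which is the stateful work done by the 
{{ChangelogNormalize}} step:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class CdcDedupConfigSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(
                        EnvironmentSettings.newInstance().inStreamingMode().build());
        // Option from the issue description; it requires a PRIMARY KEY on the source
        // table and keeps per-key state in order to drop duplicate CDC events.
        tEnv.getConfig()
                .getConfiguration()
                .setString("table.exec.source.cdc-events-duplicate", "true");
    }
}
{code}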

> ChangelogNormalize operator tooks too long time to INITIALIZING until failed
> 
>
> Key: FLINK-24539
> URL: https://issues.apache.org/jira/browse/FLINK-24539
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / Task, Table SQL / 
> Runtime
>Affects Versions: 1.13.1
> Environment: Flink version :1.13.1
> TaskManager memory:
> !image-2021-10-14-13-36-56-899.png|width=578,height=318!
> JobManager memory:
> !image-2021-10-14-13-37-51-445.png|width=578,height=229!
>Reporter: vmaster.cc
>Priority: Major
> Attachments: image-2021-10-14-13-19-08-215.png, 
> image-2021-10-14-13-36-56-899.png, image-2021-10-14-13-37-51-445.png, 
> image-2021-10-14-14-13-13-370.png, image-2021-10-14-14-15-40-101.png, 
> image-2021-10-14-14-16-33-080.png, 
> taskmanager_container_e11_1631768043929_0012_01_04_log.txt
>
>
> I'm using Debezium to produce CDC events from MySQL. Since it provides 
> at-least-once delivery, I must set the config 
> 'table.exec.source.cdc-events-duplicate=true'.
> But when some unknown issue brought my task down, the Flink job always failed 
> to restart. I found that the ChangelogNormalize operator took too long in the 
> INITIALIZING stage.
>  
> screenshot and log fragment are as follows:
> !image-2021-10-14-13-19-08-215.png|width=567,height=293!
>  
> {code:java}
> 2021-10-14 12:32:33,660 INFO  
> org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackendBuilder [] - 
> Finished building RocksDB keyed state-backend at 
> /data3/yarn/nm/usercache/flink/appcache/application_1631768043929_0012/flink-io-f31735c3-e726-4c49-89a5-916670809b7a/job_7734977994a6a10f7cc784d50e4a1a34_op_KeyedProcessOperator_dc2290bb6f8f5cd2bd425368843494fe__1_1__uuid_6cbbe6ae-f43e-4d2a-b1fb-f0cb71f257af.2021-10-14
>  12:32:33,662 INFO  org.apache.flink.runtime.taskmanager.Task 
>[] - GroupAggregate(groupBy=[teacher_id, create_day], select=[teacher_id, 
> create_day, SUM_RETRACT($f2) AS teacher_courseware_count]) -> 
> Calc(select=[teacher_id, create_day, CAST(teacher_courseware_count) AS 
> teacher_courseware_count]) -> NotNullEnforcer(fields=[teacher_id, 
> create_day]) (1/1)#143 (9cca3ef1293cc6364698381bbda93998) switched from 
> INITIALIZING to RUNNING.2021-10-14 12:38:07,581 INFO  
> org.apache.flink.runtime.taskmanager.Task[] - Ignoring 
> checkpoint aborted notification for non-running task 
> ChangelogNormalize(key=[c_id]) -> Calc(select=[c_author_id AS teacher_id, 
> DATE_FORMAT(c_create_time, _UTF-16LE'-MM-dd') AS create_day, IF((c_state 
> = 10), 1, 0) AS $f2], where=[((c_is_group = 0) AND (c_author_id <> 
> _UTF-16LE'':VARCHAR(2147483647) CHARACTER SET "UTF-16LE"))]) 
> (1/1)#143.2021-10-14 12:38:07,581 INFO  
> org.apache.flink.runtime.taskmanager.Task[] - Attempting 
> to cancel task Sink: 
> Sink(table=[default_catalog.default_database.t_flink_school_teacher_courseware_count],
>  fields=[teacher_id, create_day, teacher_courseware_count]) (2/2)#143 
> (cc25f9ae49c4db01ab40ff103fae43fd).2021-10-14 12:38:07,581 INFO  
> org.apache.flink.runtime.taskmanager.Task[] - Sink: 
> Sink(table=[default_catalog.default_database.t_flink_school_teacher_courseware_count],
>  fields=[teacher_id, create_day, teacher_courseware_count]) (2/2)#143 
> (cc25f9ae49c4db01ab40ff103fae43fd) switched from RUNNING to 
> CANCELING.2021-10-14 12:38:07,581 INFO  
> org.apache.flink.runtime.taskmanager.Task[] - Triggering 
> cancellation of task code Sink: 
> Sink(table=[default_catalog.default_database.t_flink_school_teacher_courseware_count],
>  fields=[teacher_id, create_day, teacher_courseware_count]) (2/2)#143 
> (cc25f9ae49c4db01ab40ff103fae43fd).2021-10-14 12:38:07,583 INFO  
> org.apache.flink.runtime.taskmanager.Task[] - Attempting 
> to cancel task Sink: 
> Sink(table=[default_catalog.default_database.t_flink_school_teacher_courseware_count],
>  fields=[teacher_id, create_day, teacher_courseware_count]) (1/2)#143 
> (5419f41a3f0cc6c2f3f4c82c87f4ae22).2021-10-14 12:38:07,583 INFO  
> org.apache.flink.runtime.taskmanager.Task[] - Sink: 
> Sink(table=[default_catalog.default_database.t_flink_school_teacher_courseware_count],
>  fields=[teacher_id, create_day, teacher_courseware_count]) (1/2)#143 
> 

[GitHub] [flink] fapaul commented on a change in pull request #17345: [FLINK-24227][connectors] FLIP-171: Added Kinesis Data Streams Sink i…

2021-10-18 Thread GitBox


fapaul commented on a change in pull request #17345:
URL: https://github.com/apache/flink/pull/17345#discussion_r731023148



##
File path: 
flink-connectors/flink-connector-aws/src/test/java/org/apache/flink/streaming/connectors/kinesis/async/testutils/KinesaliteContainer.java
##
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.kinesis.async.testutils;
+
+import org.rnorth.ducttape.unreliables.Unreliables;
+import org.testcontainers.containers.GenericContainer;
+import org.testcontainers.containers.wait.strategy.AbstractWaitStrategy;
+import org.testcontainers.utility.DockerImageName;
+import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
+import software.amazon.awssdk.http.SdkHttpConfigurationOption;
+import software.amazon.awssdk.http.async.SdkAsyncHttpClient;
+import software.amazon.awssdk.http.nio.netty.NettyNioAsyncHttpClient;
+import software.amazon.awssdk.regions.Region;
+import software.amazon.awssdk.services.kinesis.KinesisAsyncClient;
+import software.amazon.awssdk.services.kinesis.model.ListStreamsResponse;
+import software.amazon.awssdk.utils.AttributeMap;
+
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * A {@code org.testcontainers} container based on Kinesalite.
+ *
+ * Note that the more obvious localstack container with Kinesis took 1 
minute to start vs 3
+ * seconds of Kinesalite.
+ *
+ * TODO: THIS IMPLEMENTATION OCCASIONALLY FAILS ON THE BUILD SERVER. 
***STOP*** FIX THIS BEFORE
+ * MERGING. THIS IMPLEMENTATION IS BASED ON THE KINESIS TEST CONTAINER AT 
{@code
+ * 
org.apache.flink.streaming.connectors.kinesis.testutils.KinesaliteContainer} 
WHICH IS ALSO BROKEN
+ * IN THE SAME WAY. TESTS USING THAT CONTAINER ARE CURRENTLY BEING IGNORED. 
THE FLINK TASK ASSIGNED
+ * TO THAT @IGNORE DOES NOT ADDRESS THIS ISSUE, BUT SOME OTHER ISSUE.

Review comment:
   What is the current plan behind this comment? My previous comment was 
marked as resolved but I still think we should only have one 
`KinesaliteContainer` in the code-base.

##
File path: 
flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/AsyncSinkBaseBuilder.java
##
@@ -0,0 +1,113 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.base.sink;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.connector.base.sink.writer.ElementConverter;
+
+import java.io.Serializable;
+
+/**
+ * Abstract builder for constructing a concrete implementation of {@link 
AsyncSinkBase}.
+ *
+ * @param  type of elements that should be persisted in the destination
+ * @param  type of payload that contains the element and 
additional metadata that is
+ * required to submit a single element to the destination
+ * @param  type of concrete implementation of this builder 
class
+ */
+@Internal

Review comment:
   Shouldn't this be some kind of public annotation? I'd expect other sink 
developers to interact with this class.
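
   A minimal sketch of the suggestion (hypothetical class name, not the actual 
`AsyncSinkBaseBuilder`): annotate the builder with a public stability annotation 
such as `@PublicEvolving` so that connector authors can depend on it.

```java
import org.apache.flink.annotation.PublicEvolving;

/** Illustrative self-typed builder; the real builder carries more options. */
@PublicEvolving
public abstract class SketchSinkBuilder<InputT, BuilderT extends SketchSinkBuilder<InputT, BuilderT>> {

    private Integer maxBatchSize;

    public BuilderT setMaxBatchSize(int maxBatchSize) {
        this.maxBatchSize = maxBatchSize;
        return self();
    }

    public Integer getMaxBatchSize() {
        return maxBatchSize;
    }

    @SuppressWarnings("unchecked")
    private BuilderT self() {
        return (BuilderT) this;
    }
}
```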

##
File path: 
flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/AsyncSinkBase.java
##
@@ -49,6 +50,28 @@
 public abstract class AsyncSinkBase
 implements Sink, Void> {
 
+protected final ElementConverter elementConverter;
+protected final Integer 

[GitHub] [flink-ml] gaoyunhaii commented on a change in pull request #8: [FLINK-4][iteration] Add operator wrapper for all-round iterations.

2021-10-18 Thread GitBox


gaoyunhaii commented on a change in pull request #8:
URL: https://github.com/apache/flink-ml/pull/8#discussion_r731030286



##
File path: 
flink-ml-iteration/src/main/java/org/apache/flink/ml/iteration/EpochWatermarkAware.java
##
@@ -0,0 +1,30 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.iteration;
+
+import java.util.function.Supplier;
+
+/**
+ * Operators or UDFs implementing this interface would be provided with a supplier
+ * that provides the current round of the current element.
+ */
+public interface EpochWatermarkAware {
+
+void setEpochWatermarkSupplier(Supplier<Integer> epochWatermarkSupplier);

Review comment:
   Sorry, this class should be named `EpochAware`. It is used to provide the epoch 
of the current record.
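
   A sketch of the renamed interface as described (assuming the epoch stays an 
`Integer`, matching the supplier in the diff above):

```java
import java.util.function.Supplier;

/** Operators or UDFs implementing this are handed a supplier for the epoch of the current record. */
public interface EpochAware {

    void setEpochSupplier(Supplier<Integer> epochSupplier);
}
```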




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17512: [FLINK-24563][table-planner][test]

2021-10-18 Thread GitBox


flinkbot edited a comment on pull request #17512:
URL: https://github.com/apache/flink/pull/17512#issuecomment-945871287


   
   ## CI report:
   
   * 596c788e5cc83b2580eb6d338706db135b4ffb1f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25179)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24454) Consolidate all CAST related test into the new CastFunctionITCase class

2021-10-18 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas updated FLINK-24454:

Description: 
Check and possibly move all related CAST tests into one place, i.e.:

*DecimalCastTests,*

*CalcITCase,*

*ScalarOperatorsTest.scala,*

*etc*

  was:
Check and possibly move all related CAST tests into one place, i.e.:

*DecimalCastTests,*

*CalcITCase,*

*etc*


> Consolidate all CAST related test into the new CastFunctionITCase class
> ---
>
> Key: FLINK-24454
> URL: https://issues.apache.org/jira/browse/FLINK-24454
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Marios Trivyzas
>Assignee: Marios Trivyzas
>Priority: Major
>
> Check and possibly move all related CAST tests into one place, i.e.:
> *DecimalCastTests,*
> *CalcITCase,*
> *ScalarOperatorsTest.scala,*
> *etc*



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #17512: [FLINK-24563][table-planner][test]

2021-10-18 Thread GitBox


flinkbot commented on pull request #17512:
URL: https://github.com/apache/flink/pull/17512#issuecomment-945872777


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 596c788e5cc83b2580eb6d338706db135b4ffb1f (Mon Oct 18 
15:07:11 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-ml] gaoyunhaii commented on a change in pull request #8: [FLINK-4][iteration] Add operator wrapper for all-round iterations.

2021-10-18 Thread GitBox


gaoyunhaii commented on a change in pull request #8:
URL: https://github.com/apache/flink-ml/pull/8#discussion_r731026886



##
File path: 
flink-ml-iteration/src/main/java/org/apache/flink/ml/iteration/operator/allround/MultipleInputAllRoundWrapperOperator.java
##
@@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.iteration.operator.allround;
+
+import org.apache.flink.ml.iteration.IterationRecord;
+import org.apache.flink.streaming.api.operators.Input;
+import org.apache.flink.streaming.api.operators.MultipleInputStreamOperator;
+import org.apache.flink.streaming.api.operators.StreamOperatorFactory;
+import org.apache.flink.streaming.api.operators.StreamOperatorParameters;
+import org.apache.flink.streaming.api.watermark.Watermark;
+import org.apache.flink.streaming.runtime.streamrecord.LatencyMarker;
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+import org.apache.flink.util.FlinkRuntimeException;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/** All-round wrapper for the multiple-inputs operator. */
+public class MultipleInputAllRoundWrapperOperator
+extends AbstractAllRoundWrapperOperator>
+implements MultipleInputStreamOperator> {
+
+public MultipleInputAllRoundWrapperOperator(
+StreamOperatorParameters> parameters,
+StreamOperatorFactory operatorFactory) {
+super(parameters, operatorFactory);
+}
+
+private  void processElement(
+int inputIndex,
+Input input,
+StreamRecord reusedInput,
+StreamRecord> element)
+throws Exception {
+switch (element.getValue().getType()) {
+case RECORD:
+reusedInput.replace(element.getValue().getValue(), 
element.getTimestamp());
+setIterationContextRound(element.getValue().getRound());
+input.processElement(reusedInput);
+clearIterationContextRound();
+break;
+case EPOCH_WATERMARK:
+onEpochWatermarkEvent(inputIndex, element.getValue());
+break;
+default:
+throw new FlinkRuntimeException("Not supported iteration 
record type: " + element);
+}
+}
+
+@Override
+@SuppressWarnings({"unchecked", "rawtypes"})
+public List<Input> getInputs() {
+List<Input> proxyInputs = new ArrayList<>();
+for (int i = 0; i < wrappedOperator.getInputs().size(); ++i) {
+// TODO: Note that here we relies on the assumption that the
+// stream graph generator labels the input from 1 to n for
+// the input array, which we map them from 0 to n - 1.
+proxyInputs.add(new ProxyInput(i));
+}
+return proxyInputs;
+}
+
+public class ProxyInput implements Input> {

Review comment:
   In the original Flink implementation, `Input` is only explicitly used in 
`MultipleInputStreamOperator`, so here the `ProxyInput` is likewise only used in 
`MultipleInputStreamOperator`. 




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #17512: [FLINK-24563][table-planner][test]

2021-10-18 Thread GitBox


flinkbot commented on pull request #17512:
URL: https://github.com/apache/flink/pull/17512#issuecomment-945871287


   
   ## CI report:
   
   * 596c788e5cc83b2580eb6d338706db135b4ffb1f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



