[jira] [Updated] (FLINK-19865) YARN tests failed with "java.lang.NumberFormatException: For input string: "${env:MAX_LOG_FILE_NUMBER}"

2020-10-30 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19865:

Priority: Blocker  (was: Critical)

> YARN tests failed with "java.lang.NumberFormatException: For input string: 
> "${env:MAX_LOG_FILE_NUMBER}"
> ---
>
> Key: FLINK-19865
> URL: https://issues.apache.org/jira/browse/FLINK-19865
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8541&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=e7f339b2-a7c3-57d9-00af-3712d4b15354
> {code}
> 2020-10-28T22:58:39.4927767Z 2020-10-28 22:57:33,866 main ERROR Could not 
> create plugin of type class 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy for 
> element DefaultRolloverStrategy: java.lang.NumberFormatException: For input 
> string: "${env:MAX_LOG_FILE_NUMBER}" java.lang.NumberFormatException: For 
> input string: "${env:MAX_LOG_FILE_NUMBER}"
> 2020-10-28T22:58:39.4929252Z  at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> 2020-10-28T22:58:39.4929823Z  at java.lang.Integer.parseInt(Integer.java:569)
> 2020-10-28T22:58:39.4930327Z  at java.lang.Integer.parseInt(Integer.java:615)
> 2020-10-28T22:58:39.4931047Z  at 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy$Builder.build(DefaultRolloverStrategy.java:137)
> 2020-10-28T22:58:39.4931866Z  at 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy$Builder.build(DefaultRolloverStrategy.java:90)
> 2020-10-28T22:58:39.4932720Z  at 
> org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:122)
> 2020-10-28T22:58:39.4933446Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:1002)
> 2020-10-28T22:58:39.4934275Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:942)
> 2020-10-28T22:58:39.4935029Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:934)
> 2020-10-28T22:58:39.4935837Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:934)
> 2020-10-28T22:58:39.4936605Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:552)
> 2020-10-28T22:58:39.4937573Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:241)
> 2020-10-28T22:58:39.4938429Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:288)
> 2020-10-28T22:58:39.4939206Z  at 
> org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:579)
> 2020-10-28T22:58:39.4939885Z  at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:651)
> 2020-10-28T22:58:39.4940490Z  at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:668)
> 2020-10-28T22:58:39.4941087Z  at 
> org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:253)
> 2020-10-28T22:58:39.4941733Z  at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
> 2020-10-28T22:58:39.4942534Z  at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
> 2020-10-28T22:58:39.4943154Z  at 
> org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
> 2020-10-28T22:58:39.4943820Z  at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:138)
> 2020-10-28T22:58:39.4944540Z  at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:45)
> 2020-10-28T22:58:39.4945199Z  at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:48)
> 2020-10-28T22:58:39.4945858Z  at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:30)
> 2020-10-28T22:58:39.4946426Z  at 
> org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:329)
> 2020-10-28T22:58:39.4946965Z  at 
> org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:349)
> 2020-10-28T22:58:39.4947698Z  at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.(ClusterEntrypoint.java:108)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19865) YARN tests failed with "java.lang.NumberFormatException: For input string: "${env:MAX_LOG_FILE_NUMBER}"

2020-10-30 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223983#comment-17223983
 ] 

Dian Fu commented on FLINK-19865:
-

Upgrade to "Blocker" as this test case is continuously failing in Hadoop 3.1.3.

> YARN tests failed with "java.lang.NumberFormatException: For input string: 
> "${env:MAX_LOG_FILE_NUMBER}"
> ---
>
> Key: FLINK-19865
> URL: https://issues.apache.org/jira/browse/FLINK-19865
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8541&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=e7f339b2-a7c3-57d9-00af-3712d4b15354
> {code}
> 2020-10-28T22:58:39.4927767Z 2020-10-28 22:57:33,866 main ERROR Could not 
> create plugin of type class 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy for 
> element DefaultRolloverStrategy: java.lang.NumberFormatException: For input 
> string: "${env:MAX_LOG_FILE_NUMBER}" java.lang.NumberFormatException: For 
> input string: "${env:MAX_LOG_FILE_NUMBER}"
> 2020-10-28T22:58:39.4929252Z  at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> 2020-10-28T22:58:39.4929823Z  at java.lang.Integer.parseInt(Integer.java:569)
> 2020-10-28T22:58:39.4930327Z  at java.lang.Integer.parseInt(Integer.java:615)
> 2020-10-28T22:58:39.4931047Z  at 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy$Builder.build(DefaultRolloverStrategy.java:137)
> 2020-10-28T22:58:39.4931866Z  at 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy$Builder.build(DefaultRolloverStrategy.java:90)
> 2020-10-28T22:58:39.4932720Z  at 
> org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:122)
> 2020-10-28T22:58:39.4933446Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:1002)
> 2020-10-28T22:58:39.4934275Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:942)
> 2020-10-28T22:58:39.4935029Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:934)
> 2020-10-28T22:58:39.4935837Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:934)
> 2020-10-28T22:58:39.4936605Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:552)
> 2020-10-28T22:58:39.4937573Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:241)
> 2020-10-28T22:58:39.4938429Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:288)
> 2020-10-28T22:58:39.4939206Z  at 
> org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:579)
> 2020-10-28T22:58:39.4939885Z  at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:651)
> 2020-10-28T22:58:39.4940490Z  at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:668)
> 2020-10-28T22:58:39.4941087Z  at 
> org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:253)
> 2020-10-28T22:58:39.4941733Z  at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
> 2020-10-28T22:58:39.4942534Z  at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
> 2020-10-28T22:58:39.4943154Z  at 
> org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
> 2020-10-28T22:58:39.4943820Z  at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:138)
> 2020-10-28T22:58:39.4944540Z  at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:45)
> 2020-10-28T22:58:39.4945199Z  at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:48)
> 2020-10-28T22:58:39.4945858Z  at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:30)
> 2020-10-28T22:58:39.4946426Z  at 
> org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:329)
> 2020-10-28T22:58:39.4946965Z  at 
> org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:349)
> 2020-10-28T22:58:39.4947698Z  at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.(ClusterEntrypoint.java:108)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19865) YARN tests failed with "java.lang.NumberFormatException: For input string: "${env:MAX_LOG_FILE_NUMBER}" java.lang.NumberFormatException: For input string: "${env:MAX_L

2020-10-30 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223981#comment-17223981
 ] 

Dian Fu commented on FLINK-19865:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8698&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=e7f339b2-a7c3-57d9-00af-3712d4b15354

> YARN tests failed with "java.lang.NumberFormatException: For input string: 
> "${env:MAX_LOG_FILE_NUMBER}" java.lang.NumberFormatException: For input 
> string: "${env:MAX_LOG_FILE_NUMBER}""
> 
>
> Key: FLINK-19865
> URL: https://issues.apache.org/jira/browse/FLINK-19865
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8541&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=e7f339b2-a7c3-57d9-00af-3712d4b15354
> {code}
> 2020-10-28T22:58:39.4927767Z 2020-10-28 22:57:33,866 main ERROR Could not 
> create plugin of type class 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy for 
> element DefaultRolloverStrategy: java.lang.NumberFormatException: For input 
> string: "${env:MAX_LOG_FILE_NUMBER}" java.lang.NumberFormatException: For 
> input string: "${env:MAX_LOG_FILE_NUMBER}"
> 2020-10-28T22:58:39.4929252Z  at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> 2020-10-28T22:58:39.4929823Z  at java.lang.Integer.parseInt(Integer.java:569)
> 2020-10-28T22:58:39.4930327Z  at java.lang.Integer.parseInt(Integer.java:615)
> 2020-10-28T22:58:39.4931047Z  at 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy$Builder.build(DefaultRolloverStrategy.java:137)
> 2020-10-28T22:58:39.4931866Z  at 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy$Builder.build(DefaultRolloverStrategy.java:90)
> 2020-10-28T22:58:39.4932720Z  at 
> org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:122)
> 2020-10-28T22:58:39.4933446Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:1002)
> 2020-10-28T22:58:39.4934275Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:942)
> 2020-10-28T22:58:39.4935029Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:934)
> 2020-10-28T22:58:39.4935837Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:934)
> 2020-10-28T22:58:39.4936605Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:552)
> 2020-10-28T22:58:39.4937573Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:241)
> 2020-10-28T22:58:39.4938429Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:288)
> 2020-10-28T22:58:39.4939206Z  at 
> org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:579)
> 2020-10-28T22:58:39.4939885Z  at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:651)
> 2020-10-28T22:58:39.4940490Z  at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:668)
> 2020-10-28T22:58:39.4941087Z  at 
> org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:253)
> 2020-10-28T22:58:39.4941733Z  at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
> 2020-10-28T22:58:39.4942534Z  at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
> 2020-10-28T22:58:39.4943154Z  at 
> org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
> 2020-10-28T22:58:39.4943820Z  at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:138)
> 2020-10-28T22:58:39.4944540Z  at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:45)
> 2020-10-28T22:58:39.4945199Z  at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:48)
> 2020-10-28T22:58:39.4945858Z  at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:30)
> 2020-10-28T22:58:39.4946426Z  at 
> org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:329)
> 2020-10-28T22:58:39.4946965Z  at 
> org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:349)
> 2020-10-28T22:58:39.4947698Z  at 

[jira] [Updated] (FLINK-19865) YARN tests failed with "java.lang.NumberFormatException: For input string: "${env:MAX_LOG_FILE_NUMBER}"

2020-10-30 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19865:

Summary: YARN tests failed with "java.lang.NumberFormatException: For input 
string: "${env:MAX_LOG_FILE_NUMBER}"  (was: YARN tests failed with 
"java.lang.NumberFormatException: For input string: 
"${env:MAX_LOG_FILE_NUMBER}" java.lang.NumberFormatException: For input string: 
"${env:MAX_LOG_FILE_NUMBER}"")

> YARN tests failed with "java.lang.NumberFormatException: For input string: 
> "${env:MAX_LOG_FILE_NUMBER}"
> ---
>
> Key: FLINK-19865
> URL: https://issues.apache.org/jira/browse/FLINK-19865
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8541&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=e7f339b2-a7c3-57d9-00af-3712d4b15354
> {code}
> 2020-10-28T22:58:39.4927767Z 2020-10-28 22:57:33,866 main ERROR Could not 
> create plugin of type class 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy for 
> element DefaultRolloverStrategy: java.lang.NumberFormatException: For input 
> string: "${env:MAX_LOG_FILE_NUMBER}" java.lang.NumberFormatException: For 
> input string: "${env:MAX_LOG_FILE_NUMBER}"
> 2020-10-28T22:58:39.4929252Z  at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> 2020-10-28T22:58:39.4929823Z  at java.lang.Integer.parseInt(Integer.java:569)
> 2020-10-28T22:58:39.4930327Z  at java.lang.Integer.parseInt(Integer.java:615)
> 2020-10-28T22:58:39.4931047Z  at 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy$Builder.build(DefaultRolloverStrategy.java:137)
> 2020-10-28T22:58:39.4931866Z  at 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy$Builder.build(DefaultRolloverStrategy.java:90)
> 2020-10-28T22:58:39.4932720Z  at 
> org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:122)
> 2020-10-28T22:58:39.4933446Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:1002)
> 2020-10-28T22:58:39.4934275Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:942)
> 2020-10-28T22:58:39.4935029Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:934)
> 2020-10-28T22:58:39.4935837Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:934)
> 2020-10-28T22:58:39.4936605Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:552)
> 2020-10-28T22:58:39.4937573Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:241)
> 2020-10-28T22:58:39.4938429Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:288)
> 2020-10-28T22:58:39.4939206Z  at 
> org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:579)
> 2020-10-28T22:58:39.4939885Z  at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:651)
> 2020-10-28T22:58:39.4940490Z  at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:668)
> 2020-10-28T22:58:39.4941087Z  at 
> org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:253)
> 2020-10-28T22:58:39.4941733Z  at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
> 2020-10-28T22:58:39.4942534Z  at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
> 2020-10-28T22:58:39.4943154Z  at 
> org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
> 2020-10-28T22:58:39.4943820Z  at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:138)
> 2020-10-28T22:58:39.4944540Z  at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:45)
> 2020-10-28T22:58:39.4945199Z  at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:48)
> 2020-10-28T22:58:39.4945858Z  at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:30)
> 2020-10-28T22:58:39.4946426Z  at 
> org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:329)
> 2020-10-28T22:58:39.4946965Z  at 
> org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:349)
> 2020-10-28T22:58:39.4947698Z  at 
> 

[jira] [Commented] (FLINK-19843) ParquetFsStreamingSinkITCase.testPart failed with "Trying to access closed classloader"

2020-10-30 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223980#comment-17223980
 ] 

Dian Fu commented on FLINK-19843:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8698&view=logs&j=ba53eb01-1462-56a3-8e98-0dd97fbcaab5&t=bfbc6239-57a0-5db0-63f3-41551b4f7d51

> ParquetFsStreamingSinkITCase.testPart failed with "Trying to access closed 
> classloader"
> ---
>
> Key: FLINK-19843
> URL: https://issues.apache.org/jira/browse/FLINK-19843
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8431&view=logs&j=ba53eb01-1462-56a3-8e98-0dd97fbcaab5&t=bfbc6239-57a0-5db0-63f3-41551b4f7d51
> {code}
> 2020-10-27T22:51:46.7422561Z [ERROR] 
> testPart(org.apache.flink.formats.parquet.ParquetFsStreamingSinkITCase) Time 
> elapsed: 7.031 s <<< ERROR! 2020-10-27T22:51:46.7423062Z 
> java.lang.RuntimeException: Failed to fetch next result 
> 2020-10-27T22:51:46.7425294Z at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
>  2020-10-27T22:51:46.7426708Z at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:77)
>  2020-10-27T22:51:46.7427791Z at 
> org.apache.flink.table.planner.sinks.SelectTableSinkBase$RowIteratorWrapper.hasNext(SelectTableSinkBase.java:115)
>  2020-10-27T22:51:46.7428869Z at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:355)
>  2020-10-27T22:51:46.7429957Z at 
> java.util.Iterator.forEachRemaining(Iterator.java:115) 
> 2020-10-27T22:51:46.7430652Z at 
> org.apache.flink.util.CollectionUtil.iteratorToList(CollectionUtil.java:114) 
> 2020-10-27T22:51:46.7431826Z at 
> org.apache.flink.table.planner.runtime.stream.FsStreamingSinkITCaseBase.check(FsStreamingSinkITCaseBase.scala:141)
>  2020-10-27T22:51:46.7432859Z at 
> org.apache.flink.table.planner.runtime.stream.FsStreamingSinkITCaseBase.test(FsStreamingSinkITCaseBase.scala:122)
>  2020-10-27T22:51:46.7433902Z at 
> org.apache.flink.table.planner.runtime.stream.FsStreamingSinkITCaseBase.testPart(FsStreamingSinkITCaseBase.scala:86)
>  2020-10-27T22:51:46.7434702Z at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> 2020-10-27T22:51:46.7435452Z at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> 2020-10-27T22:51:46.7436661Z at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  2020-10-27T22:51:46.7437367Z at 
> java.lang.reflect.Method.invoke(Method.java:498) 2020-10-27T22:51:46.7438119Z 
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  2020-10-27T22:51:46.7438966Z at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  2020-10-27T22:51:46.7439789Z at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  2020-10-27T22:51:46.7440666Z at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  2020-10-27T22:51:46.7441740Z at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> 2020-10-27T22:51:46.7442533Z at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> 2020-10-27T22:51:46.7443290Z at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  2020-10-27T22:51:46.7444227Z at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  2020-10-27T22:51:46.7445043Z at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> 2020-10-27T22:51:46.7445631Z at java.lang.Thread.run(Thread.java:748) 
> 2020-10-27T22:51:46.7446383Z Caused by: java.io.IOException: Failed to fetch 
> job execution result 2020-10-27T22:51:46.7447239Z at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:175)
>  2020-10-27T22:51:46.7448233Z at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:126)
>  2020-10-27T22:51:46.7449239Z at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:103)
>  2020-10-27T22:51:46.7449963Z ... 22 more 2020-10-27T22:51:46.7450619Z Caused 
> by: java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobExecutionException: Job 

[jira] [Commented] (FLINK-19645) ShuffleCompressionITCase.testDataCompressionForBlockingShuffle is instable

2020-10-30 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223978#comment-17223978
 ] 

Dian Fu commented on FLINK-19645:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8698&view=logs&j=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3&t=a99e99c7-21cd-5a1f-7274-585e62b72f56

> ShuffleCompressionITCase.testDataCompressionForBlockingShuffle is instable
> --
>
> Key: FLINK-19645
> URL: https://issues.apache.org/jira/browse/FLINK-19645
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Assignee: Yingjie Cao
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7631&view=logs&j=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3&t=a99e99c7-21cd-5a1f-7274-585e62b72f56
> {code}
> 2020-10-14T21:46:25.3133019Z [ERROR] 
> testDataCompressionForBlockingShuffle[useBroadcastPartitioner = 
> true](org.apache.flink.test.runtime.ShuffleCompressionITCase)  Time elapsed: 
> 34.949 s  <<< ERROR!
> 2020-10-14T21:46:25.3133688Z org.apache.flink.util.FlinkException: Could not 
> close resource.
> 2020-10-14T21:46:25.3134146Z  at 
> org.apache.flink.util.AutoCloseableAsync.close(AutoCloseableAsync.java:42)
> 2020-10-14T21:46:25.3134716Z  at 
> org.apache.flink.test.runtime.ShuffleCompressionITCase.executeTest(ShuffleCompressionITCase.java:114)
> 2020-10-14T21:46:25.3135396Z  at 
> org.apache.flink.test.runtime.ShuffleCompressionITCase.testDataCompressionForBlockingShuffle(ShuffleCompressionITCase.java:89)
> 2020-10-14T21:46:25.3135941Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-10-14T21:46:25.3136410Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-10-14T21:46:25.3136959Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-10-14T21:46:25.3137435Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-10-14T21:46:25.3137915Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-10-14T21:46:25.3138459Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-10-14T21:46:25.3138987Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-10-14T21:46:25.3139524Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-10-14T21:46:25.3140017Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-10-14T21:46:25.3140495Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-10-14T21:46:25.3141037Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-10-14T21:46:25.3141520Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-10-14T21:46:25.3142213Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-10-14T21:46:25.3142672Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-10-14T21:46:25.3143134Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-10-14T21:46:25.3143578Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-10-14T21:46:25.3144026Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-10-14T21:46:25.3144437Z  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2020-10-14T21:46:25.3144815Z  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2020-10-14T21:46:25.3147907Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-10-14T21:46:25.3148415Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-10-14T21:46:25.3149058Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-10-14T21:46:25.3149526Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-10-14T21:46:25.3149993Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-10-14T21:46:25.3150429Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-10-14T21:46:25.3150918Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2020-10-14T21:46:25.3151469Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 2020-10-14T21:46:25.3152045Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-10-14T21:46:25.3152599Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 2020-10-14T21:46:25.3153163Z  at 

[jira] [Commented] (FLINK-19838) SQLClientKafkaITCase times out

2020-10-30 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223976#comment-17223976
 ] 

Dian Fu commented on FLINK-19838:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8698&view=logs&j=739e6eac-8312-5d31-d437-294c4d26fced&t=a68b8d89-50e9-5977-4500-f4fde4f57f9b

> SQLClientKafkaITCase times out
> --
>
> Key: FLINK-19838
> URL: https://issues.apache.org/jira/browse/FLINK-19838
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client, Table SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8359&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529
> {code}
> 2020-10-27T11:00:34.6543449Z Oct 27 11:00:34 [INFO] 
> ---
> 2020-10-27T11:00:34.6543850Z Oct 27 11:00:34 [INFO]  T E S T S
> 2020-10-27T11:00:34.6544483Z Oct 27 11:00:34 [INFO] 
> ---
> 2020-10-27T11:00:36.1035790Z Oct 27 11:00:36 [INFO] Running 
> org.apache.flink.tests.util.kafka.SQLClientKafkaITCase
> 2020-10-27T11:03:55.5041502Z Oct 27 11:03:55 Test (pid: 19341) did not finish 
> after 600 seconds.
> 2020-10-27T11:03:55.5046874Z Oct 27 11:03:55 Printing Flink logs and killing 
> it:
> 2020-10-27T11:03:55.5062734Z cat: 
> '/home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/log/*':
>  No such file or directory
> 2020-10-27T11:03:55.5068282Z 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common.sh: line 826: 
> kill: (19341) - No such process
> 2020-10-27T11:04:44.9331324Z Oct 27 11:04:44 Test (pid: 22633) did not finish 
> after 600 seconds.
> 2020-10-27T11:04:44.9332932Z Oct 27 11:04:44 Printing Flink logs and killing 
> it:
> 2020-10-27T11:04:44.9344128Z cat: 
> '/home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/log/*':
>  No such file or directory
> 2020-10-27T11:04:44.9353416Z 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common.sh: line 826: 
> kill: (22633) - No such process
> 2020-10-27T11:05:20.1751591Z Oct 27 11:05:20 Test (pid: 24859) did not finish 
> after 600 seconds.
> 2020-10-27T11:05:20.1752346Z Oct 27 11:05:20 Printing Flink logs and killing 
> it:
> 2020-10-27T11:05:20.1764397Z cat: 
> '/home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/log/*':
>  No such file or directory
> 2020-10-27T11:05:20.1772610Z 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common.sh: line 826: 
> kill: (24859) - No such process
> 2020-10-27T11:05:55.9534099Z Oct 27 11:05:55 Test (pid: 26969) did not finish 
> after 600 seconds.
> 2020-10-27T11:05:55.9535172Z Oct 27 11:05:55 Printing Flink logs and killing 
> it:
> 2020-10-27T11:05:55.9550896Z cat: 
> '/home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/log/*':
>  No such file or directory
> 2020-10-27T11:05:55.9555742Z 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common.sh: line 826: 
> kill: (26969) - No such process
> 2020-10-27T11:06:32.3055198Z Oct 27 11:06:32 Test (pid: 28817) did not finish 
> after 600 seconds.
> 2020-10-27T11:06:32.3057771Z Oct 27 11:06:32 Printing Flink logs and killing 
> it:
> 2020-10-27T11:06:32.3073622Z cat: 
> '/home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/log/*':
>  No such file or directory
> 2020-10-27T11:06:32.3074975Z 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common.sh: line 826: 
> kill: (28817) - No such process
> 2020-10-27T11:07:07.8402708Z Oct 27 11:07:07 Test (pid: 30667) did not finish 
> after 600 seconds.
> 2020-10-27T11:07:07.8404207Z Oct 27 11:07:07 Printing Flink logs and killing 
> it:
> 2020-10-27T11:07:07.8414052Z cat: 
> '/home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/log/*':
>  No such file or directory
> 2020-10-27T11:07:07.8417576Z 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common.sh: line 826: 
> kill: (30667) - No such process
> 2020-10-27T11:07:52.0577148Z 
> ==
> 2020-10-27T11:07:52.0578339Z === WARNING: This E2E Run took already 80% of 
> the allocated time budget of 250 minutes ===
> 2020-10-27T11:07:52.0579237Z 
> ==
> 2020-10-27T11:46:52.0516337Z 
> ==
> 2020-10-27T11:46:52.0517591Z === WARNING: This E2E Run will time out in the 
> next few 

[jira] [Commented] (FLINK-17424) SQL Client end-to-end test (Old planner) Elasticsearch (v7.5.1) failed due to download error

2020-10-30 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223970#comment-17223970
 ] 

Dian Fu commented on FLINK-17424:
-

"SQL Client end-to-end test (Blink planner) Elasticsearch (v6.3.1)" failed with 
similar error:
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8660=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529

> SQL Client end-to-end test (Old planner) Elasticsearch (v7.5.1) failed due to 
> download error
> 
>
> Key: FLINK-17424
> URL: https://issues.apache.org/jira/browse/FLINK-17424
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Yu Li
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> `SQL Client end-to-end test (Old planner) Elasticsearch (v7.5.1)` failed in the 
> release-1.10 cron job with the error below:
> {noformat}
> Preparing Elasticsearch(version=7)...
> Downloading Elasticsearch from 
> https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.5.1-linux-x86_64.tar.gz
>  ...
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>  Dload  Upload   Total   SpentLeft  Speed
>   0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 0
>   4  276M4 13.3M0 0  28.8M  0  0:00:09 --:--:--  0:00:09 28.8M
>  42  276M   42  117M0 0  80.7M  0  0:00:03  0:00:01  0:00:02 80.7M
>  70  276M   70  196M0 0  79.9M  0  0:00:03  0:00:02  0:00:01 79.9M
>  89  276M   89  248M0 0  82.3M  0  0:00:03  0:00:03 --:--:-- 82.4M
> curl: (56) GnuTLS recv error (-54): Error in the pull function.
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>  Dload  Upload   Total   SpentLeft  Speed
>   0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 
> 0curl: (7) Failed to connect to localhost port 9200: Connection refused
> [FAIL] Test script contains errors.
> {noformat}
> https://api.travis-ci.org/v3/job/680222168/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-19892) Replace __metaclass__ field with metaclass keyword

2020-10-30 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-19892.
---
Fix Version/s: 1.11.3
   1.12.0
   Resolution: Fixed

Merged to master via a7abe7d6a13bfca168828af912fd3bdb09faeb95 and release-1.11 
via 06561515d62deec1b552064b97d5c27495e14f16

> Replace __metaclass__ field with metaclass keyword
> --
>
> Key: FLINK-19892
> URL: https://issues.apache.org/jira/browse/FLINK-19892
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Huang Xingbo
>Assignee: Nicholas Jiang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.11.3
>
>
> {{__metaclass__}} is not compatible with Python 3 in some scenarios, e.g.
>  
> {code:python}
> >>> from pyflink.serializers import Serializer
> >>> import inspect
> >>> inspect.isabstract(Serializer)
> False
> {code}
>  
> We can replace the {{__metaclass__}} field with the 
> [metaclass|https://www.python.org/dev/peps/pep-3115/] keyword to solve this 
> problem.
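A sketch of the change being described, assuming a class shaped like the one in the snippet above (the actual PyFlink classes may differ). The Python 2 style, which Python 3 silently ignores:

{code:python}
from abc import ABCMeta, abstractmethod

class Serializer(object):
    # Python 2 metaclass declaration; ignored by Python 3,
    # so inspect.isabstract(Serializer) is False there.
    __metaclass__ = ABCMeta

    @abstractmethod
    def serialize(self, obj):
        pass
{code}

becomes, with the PEP 3115 keyword syntax that Python 3 honours:

{code:python}
import inspect
from abc import ABCMeta, abstractmethod

class Serializer(metaclass=ABCMeta):
    @abstractmethod
    def serialize(self, obj):
        pass

assert inspect.isabstract(Serializer)  # now True, matching the expectation above
{code}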



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-7589) org.apache.http.ConnectionClosedException: Premature end of Content-Length delimited message body (expected: 159764230; received: 64638536)

2020-10-30 Thread Yongjun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-7589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223957#comment-17223957
 ] 

Yongjun Zhang commented on FLINK-7589:
--

Hi folks, thanks for the work here. You may want to take a look at HADOOP-17338.

> org.apache.http.ConnectionClosedException: Premature end of Content-Length 
> delimited message body (expected: 159764230; received: 64638536)
> ---
>
> Key: FLINK-7589
> URL: https://issues.apache.org/jira/browse/FLINK-7589
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Bowen Li
>Priority: Blocker
> Fix For: 1.3.4, 1.4.0
>
>
> When I tried to resume a Flink job from a savepoint with different 
> parallelism, I ran into this error and the resume failed.
> {code:java}
> 2017-09-05 21:53:57,317 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph- returnsivs -> 
> Sink: xxx (7/12) (0da16ec908fc7b9b16a5c2cf1aa92947) switched from RUNNING to 
> FAILED.
> org.apache.http.ConnectionClosedException: Premature end of Content-Length 
> delimited message body (expected: 159764230; received: 64638536
>   at 
> org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:180)
>   at 
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:137)
>   at java.io.FilterInputStream.read(FilterInputStream.java:133)
>   at java.io.FilterInputStream.read(FilterInputStream.java:133)
>   at java.io.FilterInputStream.read(FilterInputStream.java:133)
>   at 
> com.amazonaws.util.ContentLengthValidationInputStream.read(ContentLengthValidationInputStream.java:77)
>   at java.io.FilterInputStream.read(FilterInputStream.java:133)
>   at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:160)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.flink.runtime.fs.hdfs.HadoopDataInputStream.read(HadoopDataInputStream.java:72)
>   at 
> org.apache.flink.core.fs.FSDataInputStreamWrapper.read(FSDataInputStreamWrapper.java:61)
>   at 
> org.apache.flink.runtime.util.NonClosingStreamDecorator.read(NonClosingStreamDecorator.java:47)
>   at java.io.DataInputStream.readFully(DataInputStream.java:195)
>   at java.io.DataInputStream.readLong(DataInputStream.java:416)
>   at 
> org.apache.flink.api.common.typeutils.base.LongSerializer.deserialize(LongSerializer.java:68)
>   at 
> org.apache.flink.streaming.api.operators.InternalTimer$TimerSerializer.deserialize(InternalTimer.java:156)
>   at 
> org.apache.flink.streaming.api.operators.HeapInternalTimerService.restoreTimersForKeyGroup(HeapInternalTimerService.java:345)
>   at 
> org.apache.flink.streaming.api.operators.InternalTimeServiceManager.restoreStateForKeyGroup(InternalTimeServiceManager.java:141)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:496)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:104)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:251)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.initializeOperators(StreamTask.java:676)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:663)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:252)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-19866) FunctionsStateBootstrapOperator.createStateAccessor fails due to uninitialized runtimeContext

2020-10-30 Thread Seth Wiesman (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seth Wiesman reassigned FLINK-19866:


Assignee: wang

> FunctionsStateBootstrapOperator.createStateAccessor fails due to 
> uninitialized runtimeContext
> -
>
> Key: FLINK-19866
> URL: https://issues.apache.org/jira/browse/FLINK-19866
> Project: Flink
>  Issue Type: Bug
>  Components: Stateful Functions
>Affects Versions: statefun-2.2.0, 1.11.2
>Reporter: wang
>Assignee: wang
>Priority: Blocker
>  Labels: pull-request-available
>
> This is a bug similar to 
> [FLINK-19330|https://issues.apache.org/jira/browse/FLINK-19330].
> In Flink 1.11.2 with statefun-flink-state-processor 2.2.0, the 
> AbstractStreamOperator's runtimeContext is not fully initialized when 
> executing AbstractStreamOperator#initializeState(); in particular, the 
> KeyedStateStore is set only after initializeState() has finished.
> See:
> [https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/AbstractStreamOperator.java#L258,L259]
> This behaviour changed from Flink 1.10 to Flink 1.11.
> StateFun's FunctionsStateBootstrapOperator performs its initialization logic 
> in initializeState(), and it requires an already initialized runtimeContext to 
> create the stateAccessor.
> This situation causes the following failure: 
> {code:java}
> Caused by: java.lang.NullPointerException: Keyed state can only be used on a 
> 'keyed stream', i.e., after a 'keyBy()' operation.Caused by: 
> java.lang.NullPointerException: Keyed state can only be used on a 'keyed 
> stream', i.e., after a 'keyBy()' operation. at 
> org.apache.flink.util.Preconditions.checkNotNull(Preconditions.java:75) at 
> org.apache.flink.streaming.api.operators.StreamingRuntimeContext.checkPreconditionsAndGetKeyedStateStore(StreamingRuntimeContext.java:223)
>  at 
> org.apache.flink.streaming.api.operators.StreamingRuntimeContext.getState(StreamingRuntimeContext.java:188)
>  at 
> org.apache.flink.statefun.flink.core.state.FlinkState.createFlinkStateAccessor(FlinkState.java:69)
>  at 
> org.apache.flink.statefun.flink.core.state.FlinkStateBinder.bindValue(FlinkStateBinder.java:48)
>  at org.apache.flink.statefun.sdk.state.StateBinder.bind(StateBinder.java:30) 
> at 
> org.apache.flink.statefun.flink.core.state.PersistedStates.findReflectivelyAndBind(PersistedStates.java:46)
>  at 
> org.apache.flink.statefun.flink.state.processor.operator.StateBootstrapFunctionRegistry.bindState(StateBootstrapFunctionRegistry.java:120)
>  at 
> org.apache.flink.statefun.flink.state.processor.operator.StateBootstrapFunctionRegistry.initialize(StateBootstrapFunctionRegistry.java:103)
>  at 
> org.apache.flink.statefun.flink.state.processor.operator.StateBootstrapper.(StateBootstrapper.java:39)
>  at 
> org.apache.flink.statefun.flink.state.processor.operator.FunctionsStateBootstrapOperator.initializeState(FunctionsStateBootstrapOperator.java:67)
>  at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:106)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:258)
>  at 
> org.apache.flink.state.api.output.BoundedStreamTask.init(BoundedStreamTask.java:85)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:457)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:522)
>  at 
> org.apache.flink.state.api.output.BoundedOneInputStreamTaskRunner.mapPartition(BoundedOneInputStreamTaskRunner.java:76)
>  at 
> org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:103)
>  at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:504) at 
> org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:369) at 
> org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:721) at 
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:546) at 
> java.lang.Thread.run(Thread.java:748){code}
>  
>  
>  
>  
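The ordering problem can be reduced to a small sketch (hypothetical names, not Flink's actual code): any state access inside {{initializeState()}} fails because the keyed state store is attached to the runtime context only after that method returns.

{code:java}
// Hypothetical illustration of the Flink 1.11 initialization order described above.
class RuntimeContextStub {
    Object keyedStateStore; // remains null until the operator is fully initialized

    Object getState() {
        if (keyedStateStore == null) {
            throw new NullPointerException(
                    "Keyed state can only be used on a 'keyed stream', i.e., after a 'keyBy()' operation.");
        }
        return keyedStateStore;
    }
}

public class InitOrder {
    public static void main(String[] args) {
        RuntimeContextStub ctx = new RuntimeContextStub();
        // StateFun's bootstrap operator binds its persisted state in
        // initializeState(), which runs BEFORE the store is set -> NPE.
        ctx.getState();
        ctx.keyedStateStore = new Object(); // only attached afterwards in 1.11
    }
}
{code}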



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-19701) Unaligned Checkpoint might misuse the number of buffers to persist from the previous barrier

2020-10-30 Thread Roman Khachatryan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223767#comment-17223767
 ] 

Roman Khachatryan edited comment on FLINK-19701 at 10/30/20, 7:44 PM:
--

-Another related issue occurs in the following scenario:- - not an issue (there 
is a check in getInflightBuffers)
 # -taskA and taskB receive AlignedCheckpoint barrier-
 # -taskB cancels it-
 # -JM sends (next) UnalignedCheckpoint barrier-
 # -now taskA will have AC barrier in queue AND will receive UC barrier - so 
RemoteInputChannel.receivedBuffers.getNumUnprioritizedElements() will include 
this barrier, which is currently illegal (ChannelStateWriter will throw an 
exception)-

-Probably, it's easier to solve it together. If not, please create a separate 
ticket.-


was (Author: roman_khachatryan):
-Another related issue occurs in the following scenario:- - not an issue (there 
is a check in getInflightBuffers)
 # taskA and taskB receive AlignedCheckpoint barrier
 # taskB cancels it
 # JM sends (next) UnalignedCheckpoint barrier
 # now taskA will have AC barrier in queue AND will receive UC barrier - so 
RemoteInputChannel.receivedBuffers.getNumUnprioritizedElements() will include 
this barrier, which is currently illegal (ChannelStateWriter will throw an 
exception)

Probably, it's easier to solve it together. If not, please create a separate 
ticket.

> Unaligned Checkpoint might misuse the number of buffers to persist from the 
> previous barrier
> 
>
> Key: FLINK-19701
> URL: https://issues.apache.org/jira/browse/FLINK-19701
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.12.0
>Reporter: Yun Gao
>Priority: Major
>
> Current CheckpointUnaligner interacts with RemoteInputChannel to persist the 
> input buffers. However, based on the current implementation, it seems we may 
> have the following case:
> {code:java}
> 1. There are 3 input channels.
> 2. Input channel 0 received barrier 1, and processed barrier 1 to start 
> checkpoint 1.
> 3. Input channel 1 received barrier 1, and processed barrier 1. Now the state 
> of the input channel persister becomes BARRIER_RECEIVED and 
> numBuffersOvertaken(channel 1) = n_1.
> 4. However, input 2 received nothing and the checkpoint expired, so a new 
> checkpoint is triggered.
> 5. Input channel 0 received barrier 2, checkpoint 1 is discarded and 
> checkpoint 2 is started. However, in this case the state of the input 
> channels is not cleared. Thus channel 1 is still BARRIER_RECEIVED and 
> numBuffersOvertaken(channel 1) = n_1, so channel 1 would only persist n_1 
> buffers in the channel for the new checkpoint 2.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-19701) Unaligned Checkpoint might misuse the number of buffers to persist from the previous barrier

2020-10-30 Thread Roman Khachatryan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223767#comment-17223767
 ] 

Roman Khachatryan edited comment on FLINK-19701 at 10/30/20, 7:43 PM:
--

-Another related issue occurs in the following scenario:- - not an issue (there 
is a check in getInflightBuffers)
 # taskA and taskB receive AlignedCheckpoint barrier
 # taskB cancels it
 # JM sends (next) UnalignedCheckpoint barrier
 # now taskA will have AC barrier in queue AND will receive UC barrier - so 
RemoteInputChannel.receivedBuffers.getNumUnprioritizedElements() will include 
this barrier, which is currently illegal (ChannelStateWriter will throw an 
exception)

Probably, it's easier to solve it together. If not, please create a separate 
ticket.


was (Author: roman_khachatryan):
Another related issue occurs in the following scenario:
 # taskA and taskB receive AlignedCheckpoint barrier
 # taskB cancels it
 # JM sends (next) UnalignedCheckpoint barrier
 # now taskA will have AC barrier in queue AND will receive UC barrier - so 
RemoteInputChannel.receivedBuffers.getNumUnprioritizedElements() will include 
this barrier, which is currently illegal (ChannelStateWriter will throw an 
exception)

Probably, it's easier to solve it together. If not, please create a separate 
ticket.

> Unaligned Checkpoint might misuse the number of buffers to persist from the 
> previous barrier
> 
>
> Key: FLINK-19701
> URL: https://issues.apache.org/jira/browse/FLINK-19701
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.12.0
>Reporter: Yun Gao
>Priority: Major
>
> Current CheckpointUnaligner interacts with RemoteInputChannel to persist the 
> input buffers. However, based on the current implementation, it seems we may 
> have the following case:
> {code:java}
> 1. There are 3 input channels.
> 2. Input channel 0 received barrier 1, and processed barrier 1 to start 
> checkpoint 1.
> 3. Input channel 1 received barrier 1, and processed barrier 1. Now the state 
> of the input channel persister becomes BARRIER_RECEIVED and 
> numBuffersOvertaken(channel 1) = n_1.
> 4. However, input 2 received nothing and the checkpoint expired, so a new 
> checkpoint is triggered.
> 5. Input channel 0 received barrier 2, checkpoint 1 is discarded and 
> checkpoint 2 is started. However, in this case the state of the input 
> channels is not cleared. Thus channel 1 is still BARRIER_RECEIVED and 
> numBuffersOvertaken(channel 1) = n_1, so channel 1 would only persist n_1 
> buffers in the channel for the new checkpoint 2.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19701) Unaligned Checkpoint might misuse the number of buffers to persist from the previous barrier

2020-10-30 Thread Roman Khachatryan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223767#comment-17223767
 ] 

Roman Khachatryan commented on FLINK-19701:
---

Another related issue occurs in the following scenario:
 # taskA and taskB receive AlignedCheckpoint barrier
 # taskB cancels it
 # JM sends (next) UnalignedCheckpoint barrier
 # now taskA will have AC barrier in queue AND will receive UC barrier - so 
RemoteInputChannel.receivedBuffers.getNumUnprioritizedElements() will include 
this barrier, which is currently illegal (ChannelStateWriter will throw an 
exception)

Probably, it's easier to solve it together. If not, please create a separate 
ticket.

> Unaligned Checkpoint might misuse the number of buffers to persist from the 
> previous barrier
> 
>
> Key: FLINK-19701
> URL: https://issues.apache.org/jira/browse/FLINK-19701
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.12.0
>Reporter: Yun Gao
>Priority: Major
>
> Current CheckpointUnaligner interacts with RemoteInputChannel to persist the 
> input buffers. However, based on the current implementation, it seems we may 
> have the following case:
> {code:java}
> 1. There are 3 input channels.
> 2. Input channel 0 received barrier 1, and processed barrier 1 to start 
> checkpoint 1.
> 3. Input channel 1 received barrier 1, and processed barrier 1. Now the state 
> of the input channel persister becomes BARRIER_RECEIVED and 
> numBuffersOvertaken(channel 1) = n_1.
> 4. However, input 2 received nothing and the checkpoint expired, so a new 
> checkpoint is triggered.
> 5. Input channel 0 received barrier 2, checkpoint 1 is discarded and 
> checkpoint 2 is started. However, in this case the state of the input 
> channels is not cleared. Thus channel 1 is still BARRIER_RECEIVED and 
> numBuffersOvertaken(channel 1) = n_1, so channel 1 would only persist n_1 
> buffers in the channel for the new checkpoint 2.
> {code}
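The core of the scenario is per-channel checkpoint state that outlives an expired checkpoint. A toy illustration of that stale-state pattern (hypothetical names, not Flink's actual classes):

{code:java}
// Hypothetical sketch of the stale state described in the scenario above:
// a per-channel persister that is not reset when a new checkpoint starts.
class ChannelPersister {
    boolean barrierReceived = false;
    int numBuffersOvertaken = 0;

    void onBarrier(int bufferedCount) {
        barrierReceived = true;
        numBuffersOvertaken = bufferedCount; // n_1 for checkpoint 1
    }

    int buffersToPersist(int currentlyBuffered) {
        // BUG: if checkpoint 1 expired and checkpoint 2 begins without a reset,
        // this still answers with checkpoint 1's n_1 instead of the new count.
        return barrierReceived ? numBuffersOvertaken : currentlyBuffered;
    }

    void resetForNewCheckpoint() { // the clearing step the scenario says is missing
        barrierReceived = false;
        numBuffersOvertaken = 0;
    }
}
{code}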



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-19897) Improve UI related to FLIP-102

2020-10-30 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias reassigned FLINK-19897:


Assignee: Yadong Xie

> Improve UI related to FLIP-102
> --
>
> Key: FLINK-19897
> URL: https://issues.apache.org/jira/browse/FLINK-19897
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Reporter: Matthias
>Assignee: Yadong Xie
>Priority: Major
> Fix For: 1.12.0
>
>
> This ticket collects issues that came up after merging FLIP-102 related 
> changes into master. The following issues should be fixed:
> * Add a tooltip to the Heap metrics cell pointing out that the max metric 
> might differ from the configured maximum value. This tooltip could be made 
> optional and only appear if the heap max differs from the configured value. 
> Here's a proposal for the tooltip text: {{The maximum heap displayed might 
> differ from the configured values depending on the GC algorithm used for this 
> process.}}
> * Rename "Network Memory Segments" to "Netty Shuffle Buffers"
> * Rename "Network Garbage Collection" to "Garbage Collection"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-19904) Update documentation to address difference between -Xmx and the metric for maximum heap

2020-10-30 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias reassigned FLINK-19904:


Assignee: Matthias

> Update documentation to address difference between -Xmx and the metric for 
> maximum heap
> ---
>
> Key: FLINK-19904
> URL: https://issues.apache.org/jira/browse/FLINK-19904
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Runtime / Metrics
>Affects Versions: 1.10.2, 1.11.2
>Reporter: Matthias
>Assignee: Matthias
>Priority: Minor
> Fix For: 1.12.0
>
>
> We observed a difference between the configured maximum heap and the maximum 
> heap returned by the metric system. This is caused by the garbage collection 
> algorithm in use (see [this 
> blogpost|https://plumbr.io/blog/memory-leaks/less-memory-than-xmx] for 
> further details).
> We should make the user aware of this in the documentation, mentioning it on 
> the memory configuration and metrics pages.
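The difference is easy to observe directly. A minimal check, assuming a HotSpot JVM started with an explicit {{-Xmx}}:

{code:java}
// Run with e.g. `java -Xmx1g MaxHeap`. With some collectors the reported
// maximum is slightly below 1 GiB because one survivor space is excluded,
// which is exactly the mismatch between -Xmx and the heap-max metric.
public class MaxHeap {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Reported max heap: %.1f MiB%n", maxBytes / (1024.0 * 1024.0));
    }
}
{code}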



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19904) Update documentation to address difference between -Xmx and the metric for maximum heap

2020-10-30 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias updated FLINK-19904:
-
Description: 
We observed a difference between the configured maximum heap and the maximum 
heap returned by the metric system. This is caused by the garbage collection 
algorithm in use (see [this 
blogpost|https://plumbr.io/blog/memory-leaks/less-memory-than-xmx] for further 
details).

We should make the user aware of this in the documentation, mentioning it on the 
memory configuration and metrics pages.

  was:
We observed a difference between the configured maximum heap and the maximum 
heap returned by the metric system. This is caused by the used garbage 
collection (see [this 
blogpost|https://plumbr.io/blog/memory-leaks/less-memory-than-xmx] for further 
details.

We should make the user aware of this in the documentation mentioning it in the 
memory configuration and the metrics page.


> Update documentation to address difference between -Xmx and the metric for 
> maximum heap
> ---
>
> Key: FLINK-19904
> URL: https://issues.apache.org/jira/browse/FLINK-19904
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Runtime / Metrics
>Affects Versions: 1.10.2, 1.11.2
>Reporter: Matthias
>Priority: Minor
> Fix For: 1.12.0
>
>
> We observed a difference between the configured maximum heap and the maximum 
> heap returned by the metric system. This is caused by the garbage collection 
> algorithm in use (see [this 
> blogpost|https://plumbr.io/blog/memory-leaks/less-memory-than-xmx] for 
> further details).
> We should make the user aware of this in the documentation by mentioning it 
> on the memory configuration and metrics pages.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-17159) ES6 ElasticsearchSinkITCase unstable

2020-10-30 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17223730#comment-17223730
 ] 

Till Rohrmann edited comment on FLINK-17159 at 10/30/20, 4:24 PM:
--

Another instability where the ES6 server did not start up: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8654=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20

{code}
[ERROR] 
testInvalidElasticsearchCluster(org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSinkITCase)
  Time elapsed: 0.019 s  <<< ERROR!
ElasticsearchStatusException[method [HEAD], host [http://127.0.0.1:9200], URI 
[/], status line [HTTP/1.1 503 Service Unavailable]]; nested: 
ResponseException[method [HEAD], host [
at 
org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:625)
at 
org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:535)
at 
org.elasticsearch.client.RestHighLevelClient.ping(RestHighLevelClient.java:275)
at 
org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSinkITCase.ensureClusterIsUp(ElasticsearchSinkITCase.java:46)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Caused by: org.elasticsearch.client.ResponseException: method [HEAD], host 
[http://127.0.0.1:9200], URI [/], status line [HTTP/1.1 503 Service Unavailable]
at 
org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:705)
at 
org.elasticsearch.client.RestClient.performRequest(RestClient.java:235)
at 
org.elasticsearch.client.RestClient.performRequest(RestClient.java:198)
at 
org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:522)
... 33 more
Caused by: org.elasticsearch.client.ResponseException: method [HEAD], host 
[http://127.0.0.1:9200], URI [/], status line [HTTP/1.1 503 Service Unavailable]
at org.elasticsearch.client.RestClient$1.completed(RestClient.java:377)
at org.elasticsearch.client.RestClient$1.completed(RestClient.java:366)
at 
org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:119)
at 
org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:177)
  

[jira] [Comment Edited] (FLINK-17159) ES6 ElasticsearchSinkITCase unstable

2020-10-30 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17223730#comment-17223730
 ] 

Till Rohrmann edited comment on FLINK-17159 at 10/30/20, 4:22 PM:
--

Another instability where the ES6 server did not start up: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8654=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20

{code}

at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Caused by: org.elasticsearch.client.ResponseException: method [HEAD], host 
[http://127.0.0.1:9200], URI [/], status line [HTTP/1.1 503 Service Unavailable]
at 
org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:705)
at 
org.elasticsearch.client.RestClient.performRequest(RestClient.java:235)
at 
org.elasticsearch.client.RestClient.performRequest(RestClient.java:198)
at 
org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:522)
... 33 more
Caused by: org.elasticsearch.client.ResponseException: method [HEAD], host 
[http://127.0.0.1:9200], URI [/], status line [HTTP/1.1 503 Service Unavailable]
at org.elasticsearch.client.RestClient$1.completed(RestClient.java:377)
at org.elasticsearch.client.RestClient$1.completed(RestClient.java:366)
at 
org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:119)
at 
org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:177)
at 
org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:436)
at 
org.apache.http.nio.protocol.HttpAsyncRequestExecutor.responseReceived(HttpAsyncRequestExecutor.java:309)
at 
org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:255)
at 
org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
at 
org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
at 
org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
at 
org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
at 
org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
at 
org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
at 
org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
at 
org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
at 
org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:588)
at 

[jira] [Commented] (FLINK-17159) ES6 ElasticsearchSinkITCase unstable

2020-10-30 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17223730#comment-17223730
 ] 

Till Rohrmann commented on FLINK-17159:
---

Another instability where the ES6 server did not start up: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8654=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20

{code}
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Caused by: org.elasticsearch.client.ResponseException: method [HEAD], host 
[http://127.0.0.1:9200], URI [/], status line [HTTP/1.1 503 Service Unavailable]
at 
org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:705)
at 
org.elasticsearch.client.RestClient.performRequest(RestClient.java:235)
at 
org.elasticsearch.client.RestClient.performRequest(RestClient.java:198)
at 
org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:522)
... 33 more
Caused by: org.elasticsearch.client.ResponseException: method [HEAD], host 
[http://127.0.0.1:9200], URI [/], status line [HTTP/1.1 503 Service Unavailable]
at org.elasticsearch.client.RestClient$1.completed(RestClient.java:377)
at org.elasticsearch.client.RestClient$1.completed(RestClient.java:366)
at 
org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:119)
at 
org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:177)
at 
org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:436)
at 
org.apache.http.nio.protocol.HttpAsyncRequestExecutor.responseReceived(HttpAsyncRequestExecutor.java:309)
at 
org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:255)
at 
org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
at 
org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
at 
org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
at 
org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
at 
org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
at 
org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
at 
org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
at 
org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
at 
org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:588)
at java.lang.Thread.run(Thread.java:748)
{code}
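
The traces in this thread point at the single ping in ensureClusterIsUp. A 
sketch of a more forgiving readiness check that retries the ping while the 
node boots is below; it assumes the ES 6.x high-level REST client, and the 
retry count and sleep interval are illustrative values, not taken from the 
test:

{code:java}
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

// Hedged sketch: poll the cluster until it answers the ping (HEAD /) instead
// of failing fast on a 503 while Elasticsearch is still starting up.
public final class ElasticsearchReadiness {
    public static void awaitClusterUp(String host, int port) throws Exception {
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost(host, port, "http")))) {
            for (int attempt = 0; attempt < 30; attempt++) {
                try {
                    if (client.ping()) { // succeeds once the node is up
                        return;
                    }
                } catch (Exception e) {
                    // a 503 surfaces as an exception here; keep retrying
                }
                Thread.sleep(1_000L);
            }
            throw new IllegalStateException("Elasticsearch did not become ready in time");
        }
    }
}
{code}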

> ES6 ElasticsearchSinkITCase unstable
> 
>
> Key: FLINK-17159
> URL: https://issues.apache.org/jira/browse/FLINK-17159
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0, 1.11.3
>
>
> [https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7482=logs=64110e28-73be-50d7-9369-8750330e0bf1=aa84fb9a-59ae-5696-70f7-011bc086e59b]
> {code:java}
> 2020-04-15T02:37:04.4289477Z [ERROR] 
> testElasticsearchSinkWithSmile(org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSinkITCase)
>   Time elapsed: 0.145 s  <<< ERROR!
> 2020-04-15T02:37:04.4290310Z 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-04-15T02:37:04.4290790Z  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
> 2020-04-15T02:37:04.4291404Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:659)
> 2020-04-15T02:37:04.4291956Z  at 
> org.apache.flink.streaming.util.TestStreamEnvironment.execute(TestStreamEnvironment.java:77)
> 2020-04-15T02:37:04.4292548Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1643)
> 

[jira] [Commented] (FLINK-19882) E2E: SQLClientHBaseITCase crash

2020-10-30 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17223725#comment-17223725
 ] 

Till Rohrmann commented on FLINK-19882:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8654=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529

A different stack trace:

{code}
Oct 30 10:44:40 Test (pid: 31744) did not finish after 600 seconds.
Oct 30 10:44:40 Printing Flink logs and killing it:
cat: 
'/home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/log/*':
 No such file or directory
/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common.sh: line 826: 
kill: (31744) - No such process
Oct 30 10:45:35 Test (pid: 2607) did not finish after 600 seconds.
Oct 30 10:45:35 Printing Flink logs and killing it:
cat: 
'/home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/log/*':
 No such file or directory
/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common.sh: line 826: 
kill: (2607) - No such process
Oct 30 10:46:10 Test (pid: 4798) did not finish after 600 seconds.
Oct 30 10:46:10 Printing Flink logs and killing it:
cat: 
'/home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/log/*':
 No such file or directory
/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common.sh: line 826: 
kill: (4798) - No such process
Oct 30 10:46:46 Test (pid: 6854) did not finish after 600 seconds.
Oct 30 10:46:46 Printing Flink logs and killing it:
cat: 
'/home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/log/*':
 No such file or directory
/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common.sh: line 826: 
kill: (6854) - No such process
Oct 30 10:47:24 Test (pid: 8646) did not finish after 600 seconds.
Oct 30 10:47:24 Printing Flink logs and killing it:
cat: 
'/home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/log/*':
 No such file or directory
Oct 30 10:47:24 Searching for .dump, .dumpstream and related files in 
'/home/vsts/work/1/s'
Oct 30 10:47:34 Moving 
'/home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire-reports/2020-10-30T10-38-49_296-jvmRun2.dumpstream'
 to target directory ('/home/vsts/work/1/s/flink-end-to-end-tests/artifacts')
Oct 30 10:47:34 Moving 
'/home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire-reports/2020-10-30T10-38-49_296-jvmRun2.dump'
 to target directory ('/home/vsts/work/1/s/flink-end-to-end-tests/artifacts')
Oct 30 10:47:34 COMPRESSING build artifacts.
Oct 30 10:47:34 ./
Oct 30 10:47:34 
./-home-vsts-work-1-s-flink-end-to-end-tests-flink-end-to-end-tests-hbase-target-surefire-reports-2020-10-30T10-38-49_296-jvmRun2.dumpstream
Oct 30 10:47:34 ./mvn-1.log
Oct 30 10:47:35 
./-home-vsts-work-1-s-flink-end-to-end-tests-flink-end-to-end-tests-hbase-target-surefire-reports-2020-10-30T10-38-49_296-jvmRun2.dump
Oct 30 10:47:35 ./mvn-2.log
Oct 30 10:47:35 Stopping taskexecutor daemon (pid: 14341) on host fv-az679-409.
Oct 30 10:47:35 Stopping standalonesession daemon (pid: 13466) on host 
fv-az679-409.
rm: cannot remove 
'/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-59903502934/tmp/backup':
 No such file or directory
Oct 30 10:48:00 Test (pid: 10434) did not finish after 600 seconds.
Oct 30 10:48:00 Printing Flink logs and killing it:
cat: 
'/home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/log/*':
 No such file or directory
/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common.sh: line 826: 
kill: (10434) - No such process
The STDIO streams did not close within 10 seconds of the exit event from 
process '/bin/bash'. This may indicate a child process inherited the STDIO 
streams and has not yet exited.
{code}
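
One detail worth noting for triage: the surefire output for this crash reports 
"Process Exit Code: 143", which conventionally decodes as 128 + 15, i.e. the 
forked VM was terminated by SIGTERM — consistent with the watchdog above 
killing tests that overran. A tiny illustrative decoder (the class name is 
made up for the example):

{code:java}
// Decode a Unix-style exit status: values above 128 conventionally mean
// death by signal (128 + signal number), so 143 = 128 + 15 = SIGTERM.
public class ExitCodeDecoder {
    public static String decode(int exitCode) {
        return exitCode > 128
                ? "killed by signal " + (exitCode - 128)
                : "normal exit with status " + exitCode;
    }

    public static void main(String[] args) {
        System.out.println(143 + " -> " + decode(143)); // killed by signal 15
    }
}
{code}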

> E2E: SQLClientHBaseITCase crash
> ---
>
> Key: FLINK-19882
> URL: https://issues.apache.org/jira/browse/FLINK-19882
> Project: Flink
>  Issue Type: Test
>  Components: Connectors / HBase
>Reporter: Jingsong Lee
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> INSTANCE: 
> [https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/8563/logs/141]
> {code:java}
> 2020-10-29T09:43:24.0088180Z [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test (end-to-end-tests) 
> on project flink-end-to-end-tests-hbase: There are test failures.
> 2020-10-29T09:43:24.0088792Z [ERROR] 
> 2020-10-29T09:43:24.0089518Z [ERROR] Please refer to 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire-reports
>  for the individual test results.
> 2020-10-29T09:43:24.0090427Z [ERROR] Please refer to dump 

[jira] [Updated] (FLINK-19882) E2E: SQLClientHBaseITCase crash

2020-10-30 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-19882:
--
Fix Version/s: 1.12.0

> E2E: SQLClientHBaseITCase crash
> ---
>
> Key: FLINK-19882
> URL: https://issues.apache.org/jira/browse/FLINK-19882
> Project: Flink
>  Issue Type: Test
>  Components: Connectors / HBase
>Reporter: Jingsong Lee
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> INSTANCE: 
> [https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/8563/logs/141]
> {code:java}
> 2020-10-29T09:43:24.0088180Z [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test (end-to-end-tests) 
> on project flink-end-to-end-tests-hbase: There are test failures.
> 2020-10-29T09:43:24.0088792Z [ERROR] 
> 2020-10-29T09:43:24.0089518Z [ERROR] Please refer to 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire-reports
>  for the individual test results.
> 2020-10-29T09:43:24.0090427Z [ERROR] Please refer to dump files (if any 
> exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
> 2020-10-29T09:43:24.0090914Z [ERROR] The forked VM terminated without 
> properly saying goodbye. VM crash or System.exit called?
> 2020-10-29T09:43:24.0093105Z [ERROR] Command was /bin/sh -c cd 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target
>  && /usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=2 -XX:+UseG1GC -jar 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire/surefirebooter6795869883612750001.jar
>  
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire
>  2020-10-29T09-34-47_778-jvmRun2 surefire2269050977160717631tmp 
> surefire_67897497331523564186tmp
> 2020-10-29T09:43:24.0094488Z [ERROR] Error occurred in starting fork, check 
> output in log
> 2020-10-29T09:43:24.0094797Z [ERROR] Process Exit Code: 143
> 2020-10-29T09:43:24.0095033Z [ERROR] Crashed tests:
> 2020-10-29T09:43:24.0095321Z [ERROR] 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-10-29T09:43:24.0095828Z [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: The forked VM 
> terminated without properly saying goodbye. VM crash or System.exit called?
> 2020-10-29T09:43:24.0097838Z [ERROR] Command was /bin/sh -c cd 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target
>  && /usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=2 -XX:+UseG1GC -jar 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire/surefirebooter6795869883612750001.jar
>  
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire
>  2020-10-29T09-34-47_778-jvmRun2 surefire2269050977160717631tmp 
> surefire_67897497331523564186tmp
> 2020-10-29T09:43:24.0098966Z [ERROR] Error occurred in starting fork, check 
> output in log
> 2020-10-29T09:43:24.0099266Z [ERROR] Process Exit Code: 143
> 2020-10-29T09:43:24.0099502Z [ERROR] Crashed tests:
> 2020-10-29T09:43:24.0099789Z [ERROR] 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-10-29T09:43:24.0100331Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:669)
> 2020-10-29T09:43:24.0100883Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:282)
> 2020-10-29T09:43:24.0101774Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:245)
> 2020-10-29T09:43:24.0102360Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183)
> 2020-10-29T09:43:24.0103004Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011)
> 2020-10-29T09:43:24.0103737Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:857)
> 2020-10-29T09:43:24.0104301Z [ERROR] at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
> 2020-10-29T09:43:24.0104828Z [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> 2020-10-29T09:43:24.0105334Z [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> 2020-10-29T09:43:24.0105826Z [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> 2020-10-29T09:43:24.0106384Z [ERROR] at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)

[jira] [Updated] (FLINK-19882) E2E: SQLClientHBaseITCase crash

2020-10-30 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-19882:
--
Priority: Critical  (was: Major)

> E2E: SQLClientHBaseITCase crash
> ---
>
> Key: FLINK-19882
> URL: https://issues.apache.org/jira/browse/FLINK-19882
> Project: Flink
>  Issue Type: Test
>  Components: Connectors / HBase
>Reporter: Jingsong Lee
>Priority: Critical
>  Labels: testability
>
> INSTANCE: 
> [https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/8563/logs/141]
> {code:java}
> 2020-10-29T09:43:24.0088180Z [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test (end-to-end-tests) 
> on project flink-end-to-end-tests-hbase: There are test failures.
> 2020-10-29T09:43:24.0088792Z [ERROR] 
> 2020-10-29T09:43:24.0089518Z [ERROR] Please refer to 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire-reports
>  for the individual test results.
> 2020-10-29T09:43:24.0090427Z [ERROR] Please refer to dump files (if any 
> exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
> 2020-10-29T09:43:24.0090914Z [ERROR] The forked VM terminated without 
> properly saying goodbye. VM crash or System.exit called?
> 2020-10-29T09:43:24.0093105Z [ERROR] Command was /bin/sh -c cd 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target
>  && /usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=2 -XX:+UseG1GC -jar 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire/surefirebooter6795869883612750001.jar
>  
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire
>  2020-10-29T09-34-47_778-jvmRun2 surefire2269050977160717631tmp 
> surefire_67897497331523564186tmp
> 2020-10-29T09:43:24.0094488Z [ERROR] Error occurred in starting fork, check 
> output in log
> 2020-10-29T09:43:24.0094797Z [ERROR] Process Exit Code: 143
> 2020-10-29T09:43:24.0095033Z [ERROR] Crashed tests:
> 2020-10-29T09:43:24.0095321Z [ERROR] 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-10-29T09:43:24.0095828Z [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: The forked VM 
> terminated without properly saying goodbye. VM crash or System.exit called?
> 2020-10-29T09:43:24.0097838Z [ERROR] Command was /bin/sh -c cd 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target
>  && /usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=2 -XX:+UseG1GC -jar 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire/surefirebooter6795869883612750001.jar
>  
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire
>  2020-10-29T09-34-47_778-jvmRun2 surefire2269050977160717631tmp 
> surefire_67897497331523564186tmp
> 2020-10-29T09:43:24.0098966Z [ERROR] Error occurred in starting fork, check 
> output in log
> 2020-10-29T09:43:24.0099266Z [ERROR] Process Exit Code: 143
> 2020-10-29T09:43:24.0099502Z [ERROR] Crashed tests:
> 2020-10-29T09:43:24.0099789Z [ERROR] 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-10-29T09:43:24.0100331Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:669)
> 2020-10-29T09:43:24.0100883Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:282)
> 2020-10-29T09:43:24.0101774Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:245)
> 2020-10-29T09:43:24.0102360Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183)
> 2020-10-29T09:43:24.0103004Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011)
> 2020-10-29T09:43:24.0103737Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:857)
> 2020-10-29T09:43:24.0104301Z [ERROR] at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
> 2020-10-29T09:43:24.0104828Z [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> 2020-10-29T09:43:24.0105334Z [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> 2020-10-29T09:43:24.0105826Z [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> 2020-10-29T09:43:24.0106384Z [ERROR] at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
> 

[jira] [Updated] (FLINK-19882) E2E: SQLClientHBaseITCase crash

2020-10-30 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-19882:
--
Labels: test-stability  (was: testability)

> E2E: SQLClientHBaseITCase crash
> ---
>
> Key: FLINK-19882
> URL: https://issues.apache.org/jira/browse/FLINK-19882
> Project: Flink
>  Issue Type: Test
>  Components: Connectors / HBase
>Reporter: Jingsong Lee
>Priority: Critical
>  Labels: test-stability
>
> INSTANCE: 
> [https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/8563/logs/141]
> {code:java}
> 2020-10-29T09:43:24.0088180Z [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test (end-to-end-tests) 
> on project flink-end-to-end-tests-hbase: There are test failures.
> 2020-10-29T09:43:24.0088792Z [ERROR] 
> 2020-10-29T09:43:24.0089518Z [ERROR] Please refer to 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire-reports
>  for the individual test results.
> 2020-10-29T09:43:24.0090427Z [ERROR] Please refer to dump files (if any 
> exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
> 2020-10-29T09:43:24.0090914Z [ERROR] The forked VM terminated without 
> properly saying goodbye. VM crash or System.exit called?
> 2020-10-29T09:43:24.0093105Z [ERROR] Command was /bin/sh -c cd 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target
>  && /usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=2 -XX:+UseG1GC -jar 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire/surefirebooter6795869883612750001.jar
>  
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire
>  2020-10-29T09-34-47_778-jvmRun2 surefire2269050977160717631tmp 
> surefire_67897497331523564186tmp
> 2020-10-29T09:43:24.0094488Z [ERROR] Error occurred in starting fork, check 
> output in log
> 2020-10-29T09:43:24.0094797Z [ERROR] Process Exit Code: 143
> 2020-10-29T09:43:24.0095033Z [ERROR] Crashed tests:
> 2020-10-29T09:43:24.0095321Z [ERROR] 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-10-29T09:43:24.0095828Z [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: The forked VM 
> terminated without properly saying goodbye. VM crash or System.exit called?
> 2020-10-29T09:43:24.0097838Z [ERROR] Command was /bin/sh -c cd 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target
>  && /usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=2 -XX:+UseG1GC -jar 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire/surefirebooter6795869883612750001.jar
>  
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire
>  2020-10-29T09-34-47_778-jvmRun2 surefire2269050977160717631tmp 
> surefire_67897497331523564186tmp
> 2020-10-29T09:43:24.0098966Z [ERROR] Error occurred in starting fork, check 
> output in log
> 2020-10-29T09:43:24.0099266Z [ERROR] Process Exit Code: 143
> 2020-10-29T09:43:24.0099502Z [ERROR] Crashed tests:
> 2020-10-29T09:43:24.0099789Z [ERROR] 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-10-29T09:43:24.0100331Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:669)
> 2020-10-29T09:43:24.0100883Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:282)
> 2020-10-29T09:43:24.0101774Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:245)
> 2020-10-29T09:43:24.0102360Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183)
> 2020-10-29T09:43:24.0103004Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011)
> 2020-10-29T09:43:24.0103737Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:857)
> 2020-10-29T09:43:24.0104301Z [ERROR] at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
> 2020-10-29T09:43:24.0104828Z [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> 2020-10-29T09:43:24.0105334Z [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> 2020-10-29T09:43:24.0105826Z [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> 2020-10-29T09:43:24.0106384Z [ERROR] at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
> 

[jira] [Updated] (FLINK-18570) SQLClientHBaseITCase.testHBase fails on azure

2020-10-30 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-18570:
--
Fix Version/s: 1.12.0

> SQLClientHBaseITCase.testHBase fails on azure
> -
>
> Key: FLINK-18570
> URL: https://issues.apache.org/jira/browse/FLINK-18570
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase, Table SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: Dawid Wysakowicz
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4403=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529
> {code}
> 2020-07-10T17:06:32.1514539Z [INFO] Running 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-07-10T17:08:09.9141283Z [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 97.757 s <<< FAILURE! - in 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-07-10T17:08:09.9144691Z [ERROR] 
> testHBase(org.apache.flink.tests.util.hbase.SQLClientHBaseITCase)  Time 
> elapsed: 97.757 s  <<< ERROR!
> 2020-07-10T17:08:09.9145637Z java.io.IOException: 
> 2020-07-10T17:08:09.9146515Z Process execution failed due error. Error 
> output:Jul 10, 2020 5:07:32 PM org.jline.utils.Log logr
> 2020-07-10T17:08:09.9147152Z WARNING: Unable to create a system terminal, 
> creating a dumb terminal (enable debug logging for more information)
> 2020-07-10T17:08:09.9147776Z Exception in thread "main" 
> org.apache.flink.table.client.SqlClientException: Unexpected exception. This 
> is a bug. Please consider filing an issue.
> 2020-07-10T17:08:09.9148432Z  at 
> org.apache.flink.table.client.SqlClient.main(SqlClient.java:213)
> 2020-07-10T17:08:09.9148828Z Caused by: java.lang.RuntimeException: Error 
> running SQL job.
> 2020-07-10T17:08:09.9149329Z  at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeUpdateInternal$14(LocalExecutor.java:598)
> 2020-07-10T17:08:09.9149932Z  at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:255)
> 2020-07-10T17:08:09.9150501Z  at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeUpdateInternal(LocalExecutor.java:592)
> 2020-07-10T17:08:09.9151088Z  at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeUpdate(LocalExecutor.java:515)
> 2020-07-10T17:08:09.9151577Z  at 
> org.apache.flink.table.client.cli.CliClient.callInsert(CliClient.java:599)
> 2020-07-10T17:08:09.9152044Z  at 
> org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:315)
> 2020-07-10T17:08:09.9152456Z  at 
> java.util.Optional.ifPresent(Optional.java:159)
> 2020-07-10T17:08:09.9152874Z  at 
> org.apache.flink.table.client.cli.CliClient.open(CliClient.java:212)
> 2020-07-10T17:08:09.9153312Z  at 
> org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:142)
> 2020-07-10T17:08:09.9153729Z  at 
> org.apache.flink.table.client.SqlClient.start(SqlClient.java:114)
> 2020-07-10T17:08:09.9154151Z  at 
> org.apache.flink.table.client.SqlClient.main(SqlClient.java:201)
> 2020-07-10T17:08:09.9154685Z Caused by: 
> java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
> 2020-07-10T17:08:09.9155272Z  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 2020-07-10T17:08:09.9156047Z  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2020-07-10T17:08:09.9156474Z  at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeUpdateInternal$14(LocalExecutor.java:595)
> 2020-07-10T17:08:09.9156802Z  ... 10 more
> 2020-07-10T17:08:09.9157069Z Caused by: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
> 2020-07-10T17:08:09.9157508Z  at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$7(RestClusterClient.java:366)
> 2020-07-10T17:08:09.9157942Z  at 
> java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:884)
> 2020-07-10T17:08:09.9158349Z  at 
> java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:866)
> 2020-07-10T17:08:09.9158762Z  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2020-07-10T17:08:09.9159168Z  at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
> 2020-07-10T17:08:09.9159614Z  at 
> org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$8(FutureUtils.java:292)
> 2020-07-10T17:08:09.9160033Z  at 
> 
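
The retryOperationWithDelay frame above shows that the submission already goes 
through a retry loop before JobSubmissionException surfaces. For readers 
unfamiliar with the pattern, here is a hedged, illustrative reimplementation 
of retry-with-delay over CompletableFuture; it is not Flink's actual 
FutureUtils, and all names are made up for the sketch:

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Illustrative retry-with-delay: re-run an async operation a fixed number of
// times, waiting between attempts, and complete exceptionally when exhausted.
public final class Retries {
    public static <T> CompletableFuture<T> retryWithDelay(
            Supplier<CompletableFuture<T>> operation,
            int retries,
            long delayMillis,
            ScheduledExecutorService scheduler) {
        CompletableFuture<T> result = new CompletableFuture<>();
        attempt(operation, retries, delayMillis, scheduler, result);
        return result;
    }

    private static <T> void attempt(
            Supplier<CompletableFuture<T>> operation,
            int retriesLeft,
            long delayMillis,
            ScheduledExecutorService scheduler,
            CompletableFuture<T> result) {
        operation.get().whenComplete((value, error) -> {
            if (error == null) {
                result.complete(value);
            } else if (retriesLeft > 0) {
                scheduler.schedule(
                        () -> attempt(operation, retriesLeft - 1, delayMillis, scheduler, result),
                        delayMillis,
                        TimeUnit.MILLISECONDS);
            } else {
                result.completeExceptionally(error); // mirrors the trace's completeExceptionally
            }
        });
    }
}
{code}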

[jira] [Updated] (FLINK-19863) SQLClientHBaseITCase.testHBase failed with "java.io.IOException: Process failed due to timeout"

2020-10-30 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-19863:
--
Fix Version/s: 1.12.0

> SQLClientHBaseITCase.testHBase failed with "java.io.IOException: Process 
> failed due to timeout"
> ---
>
> Key: FLINK-19863
> URL: https://issues.apache.org/jira/browse/FLINK-19863
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8541=logs=91bf6583-3fb2-592f-e4d4-d79d79c3230a=3425d8ba-5f03-540a-c64b-51b8481bf7d6
> {code}
> 00:50:02,589 [main] INFO  
> org.apache.flink.tests.util.flink.FlinkDistribution  [] - Stopping 
> Flink cluster.
> 00:50:04,106 [main] INFO  
> org.apache.flink.tests.util.flink.FlinkDistribution  [] - Stopping 
> Flink cluster.
> 00:50:04,741 [main] INFO  
> org.apache.flink.tests.util.flink.LocalStandaloneFlinkResource [] - Backed up 
> logs to 
> /home/vsts/work/1/s/flink-end-to-end-tests/artifacts/flink-b3924665-1ac9-4309-8255-20f0dc94e7b9.
> 00:50:04,788 [main] INFO  
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource [] - Stopping 
> HBase Cluster
> 00:50:16,243 [main] ERROR 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase   [] - 
> 
> Test testHBase[0: 
> hbase-version:1.4.3](org.apache.flink.tests.util.hbase.SQLClientHBaseITCase) 
> failed with:
> java.io.IOException: Process failed due to timeout.
>   at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:130)
>   at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:108)
>   at 
> org.apache.flink.tests.util.flink.FlinkDistribution.submitSQLJob(FlinkDistribution.java:221)
>   at 
> org.apache.flink.tests.util.flink.LocalStandaloneFlinkResource$StandaloneClusterController.submitSQLJob(LocalStandaloneFlinkResource.java:196)
>   at 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase.executeSqlStatements(SQLClientHBaseITCase.java:215)
>   at 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase.testHBase(SQLClientHBaseITCase.java:152)
> {code}
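
The timeout originates in the test harness's blocking process runner. Below is 
a minimal sketch of that pattern, assuming nothing about Flink's actual 
AutoClosableProcess beyond the behavior visible in the trace; the class name 
and the zero-exit-code check are illustrative:

{code:java}
import java.io.IOException;
import java.time.Duration;
import java.util.concurrent.TimeUnit;

// Illustrative blocking runner: start a process, wait up to a deadline, and
// fail with an IOException — the shape of failure seen in the trace above.
public final class BlockingRunner {
    public static void runBlocking(Duration timeout, String... command)
            throws IOException, InterruptedException {
        Process process = new ProcessBuilder(command).inheritIO().start();
        if (!process.waitFor(timeout.toMillis(), TimeUnit.MILLISECONDS)) {
            process.destroyForcibly(); // kill the straggler before failing
            throw new IOException("Process failed due to timeout.");
        }
        int exitCode = process.exitValue();
        if (exitCode != 0) {
            throw new IOException("Process exited with code " + exitCode);
        }
    }
}
{code}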



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-19865) YARN tests failed with "java.lang.NumberFormatException: For input string: "${env:MAX_LOG_FILE_NUMBER}" java.lang.NumberFormatException: For input string: "${env:

2020-10-30 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17223717#comment-17223717
 ] 

Till Rohrmann edited comment on FLINK-19865 at 10/30/20, 4:11 PM:
--

Could you take a look at the problem [~rmetzger]? It might be related to the 
rolling log file changes FLINK-8357.


was (Author: till.rohrmann):
Could you take a look at the problem [~rmetzger]? It might be related to the 
rolling log file changes.

> YARN tests failed with "java.lang.NumberFormatException: For input string: 
> "${env:MAX_LOG_FILE_NUMBER}" java.lang.NumberFormatException: For input 
> string: "${env:MAX_LOG_FILE_NUMBER}""
> 
>
> Key: FLINK-19865
> URL: https://issues.apache.org/jira/browse/FLINK-19865
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8541=logs=245e1f2e-ba5b-5570-d689-25ae21e5302f=e7f339b2-a7c3-57d9-00af-3712d4b15354
> {code}
> 2020-10-28T22:58:39.4927767Z 2020-10-28 22:57:33,866 main ERROR Could not 
> create plugin of type class 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy for 
> element DefaultRolloverStrategy: java.lang.NumberFormatException: For input 
> string: "${env:MAX_LOG_FILE_NUMBER}" java.lang.NumberFormatException: For 
> input string: "${env:MAX_LOG_FILE_NUMBER}"
> 2020-10-28T22:58:39.4929252Z  at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> 2020-10-28T22:58:39.4929823Z  at java.lang.Integer.parseInt(Integer.java:569)
> 2020-10-28T22:58:39.4930327Z  at java.lang.Integer.parseInt(Integer.java:615)
> 2020-10-28T22:58:39.4931047Z  at 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy$Builder.build(DefaultRolloverStrategy.java:137)
> 2020-10-28T22:58:39.4931866Z  at 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy$Builder.build(DefaultRolloverStrategy.java:90)
> 2020-10-28T22:58:39.4932720Z  at 
> org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:122)
> 2020-10-28T22:58:39.4933446Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:1002)
> 2020-10-28T22:58:39.4934275Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:942)
> 2020-10-28T22:58:39.4935029Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:934)
> 2020-10-28T22:58:39.4935837Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:934)
> 2020-10-28T22:58:39.4936605Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:552)
> 2020-10-28T22:58:39.4937573Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:241)
> 2020-10-28T22:58:39.4938429Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:288)
> 2020-10-28T22:58:39.4939206Z  at 
> org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:579)
> 2020-10-28T22:58:39.4939885Z  at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:651)
> 2020-10-28T22:58:39.4940490Z  at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:668)
> 2020-10-28T22:58:39.4941087Z  at 
> org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:253)
> 2020-10-28T22:58:39.4941733Z  at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
> 2020-10-28T22:58:39.4942534Z  at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
> 2020-10-28T22:58:39.4943154Z  at 
> org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
> 2020-10-28T22:58:39.4943820Z  at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:138)
> 2020-10-28T22:58:39.4944540Z  at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:45)
> 2020-10-28T22:58:39.4945199Z  at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:48)
> 2020-10-28T22:58:39.4945858Z  at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:30)
> 2020-10-28T22:58:39.4946426Z  at 
> 

[jira] [Comment Edited] (FLINK-19865) YARN tests failed with "java.lang.NumberFormatException: For input string: "${env:MAX_LOG_FILE_NUMBER}" java.lang.NumberFormatException: For input string: "${env:

2020-10-30 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17223717#comment-17223717
 ] 

Till Rohrmann edited comment on FLINK-19865 at 10/30/20, 4:11 PM:
--

Could you take a look at the problem [~rmetzger]? It might be related to the 
rolling log file changes.


was (Author: till.rohrmann):
Could you take a look at the problem [~rmetzger]?

> YARN tests failed with "java.lang.NumberFormatException: For input string: 
> "${env:MAX_LOG_FILE_NUMBER}" java.lang.NumberFormatException: For input 
> string: "${env:MAX_LOG_FILE_NUMBER}""
> 
>
> Key: FLINK-19865
> URL: https://issues.apache.org/jira/browse/FLINK-19865
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8541=logs=245e1f2e-ba5b-5570-d689-25ae21e5302f=e7f339b2-a7c3-57d9-00af-3712d4b15354
> {code}
> 2020-10-28T22:58:39.4927767Z 2020-10-28 22:57:33,866 main ERROR Could not 
> create plugin of type class 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy for 
> element DefaultRolloverStrategy: java.lang.NumberFormatException: For input 
> string: "${env:MAX_LOG_FILE_NUMBER}" java.lang.NumberFormatException: For 
> input string: "${env:MAX_LOG_FILE_NUMBER}"
> 2020-10-28T22:58:39.4929252Z  at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> 2020-10-28T22:58:39.4929823Z  at java.lang.Integer.parseInt(Integer.java:569)
> 2020-10-28T22:58:39.4930327Z  at java.lang.Integer.parseInt(Integer.java:615)
> 2020-10-28T22:58:39.4931047Z  at 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy$Builder.build(DefaultRolloverStrategy.java:137)
> 2020-10-28T22:58:39.4931866Z  at 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy$Builder.build(DefaultRolloverStrategy.java:90)
> 2020-10-28T22:58:39.4932720Z  at 
> org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:122)
> 2020-10-28T22:58:39.4933446Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:1002)
> 2020-10-28T22:58:39.4934275Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:942)
> 2020-10-28T22:58:39.4935029Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:934)
> 2020-10-28T22:58:39.4935837Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:934)
> 2020-10-28T22:58:39.4936605Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:552)
> 2020-10-28T22:58:39.4937573Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:241)
> 2020-10-28T22:58:39.4938429Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:288)
> 2020-10-28T22:58:39.4939206Z  at 
> org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:579)
> 2020-10-28T22:58:39.4939885Z  at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:651)
> 2020-10-28T22:58:39.4940490Z  at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:668)
> 2020-10-28T22:58:39.4941087Z  at 
> org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:253)
> 2020-10-28T22:58:39.4941733Z  at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
> 2020-10-28T22:58:39.4942534Z  at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
> 2020-10-28T22:58:39.4943154Z  at 
> org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
> 2020-10-28T22:58:39.4943820Z  at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:138)
> 2020-10-28T22:58:39.4944540Z  at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:45)
> 2020-10-28T22:58:39.4945199Z  at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:48)
> 2020-10-28T22:58:39.4945858Z  at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:30)
> 2020-10-28T22:58:39.4946426Z  at 
> org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:329)
> 2020-10-28T22:58:39.4946965Z  at 
> 

[jira] [Updated] (FLINK-19865) YARN tests failed with "java.lang.NumberFormatException: For input string: "${env:MAX_LOG_FILE_NUMBER}" java.lang.NumberFormatException: For input string: "${env:MAX_LOG

2020-10-30 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-19865:
--
Fix Version/s: 1.12.0

> YARN tests failed with "java.lang.NumberFormatException: For input string: 
> "${env:MAX_LOG_FILE_NUMBER}" java.lang.NumberFormatException: For input 
> string: "${env:MAX_LOG_FILE_NUMBER}""
> 
>
> Key: FLINK-19865
> URL: https://issues.apache.org/jira/browse/FLINK-19865
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8541=logs=245e1f2e-ba5b-5570-d689-25ae21e5302f=e7f339b2-a7c3-57d9-00af-3712d4b15354
> {code}
> 2020-10-28T22:58:39.4927767Z 2020-10-28 22:57:33,866 main ERROR Could not 
> create plugin of type class 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy for 
> element DefaultRolloverStrategy: java.lang.NumberFormatException: For input 
> string: "${env:MAX_LOG_FILE_NUMBER}" java.lang.NumberFormatException: For 
> input string: "${env:MAX_LOG_FILE_NUMBER}"
> 2020-10-28T22:58:39.4929252Z  at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> 2020-10-28T22:58:39.4929823Z  at java.lang.Integer.parseInt(Integer.java:569)
> 2020-10-28T22:58:39.4930327Z  at java.lang.Integer.parseInt(Integer.java:615)
> 2020-10-28T22:58:39.4931047Z  at 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy$Builder.build(DefaultRolloverStrategy.java:137)
> 2020-10-28T22:58:39.4931866Z  at 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy$Builder.build(DefaultRolloverStrategy.java:90)
> 2020-10-28T22:58:39.4932720Z  at 
> org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:122)
> 2020-10-28T22:58:39.4933446Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:1002)
> 2020-10-28T22:58:39.4934275Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:942)
> 2020-10-28T22:58:39.4935029Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:934)
> 2020-10-28T22:58:39.4935837Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:934)
> 2020-10-28T22:58:39.4936605Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:552)
> 2020-10-28T22:58:39.4937573Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:241)
> 2020-10-28T22:58:39.4938429Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:288)
> 2020-10-28T22:58:39.4939206Z  at 
> org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:579)
> 2020-10-28T22:58:39.4939885Z  at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:651)
> 2020-10-28T22:58:39.4940490Z  at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:668)
> 2020-10-28T22:58:39.4941087Z  at 
> org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:253)
> 2020-10-28T22:58:39.4941733Z  at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
> 2020-10-28T22:58:39.4942534Z  at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
> 2020-10-28T22:58:39.4943154Z  at 
> org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
> 2020-10-28T22:58:39.4943820Z  at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:138)
> 2020-10-28T22:58:39.4944540Z  at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:45)
> 2020-10-28T22:58:39.4945199Z  at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:48)
> 2020-10-28T22:58:39.4945858Z  at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:30)
> 2020-10-28T22:58:39.4946426Z  at 
> org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:329)
> 2020-10-28T22:58:39.4946965Z  at 
> org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:349)
> 2020-10-28T22:58:39.4947698Z  at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.(ClusterEntrypoint.java:108)
> {code}
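
The failure mode is visible in the first frames: the literal placeholder 
reaches Integer.parseInt, meaning the env lookup was never substituted 
(presumably MAX_LOG_FILE_NUMBER was unset in the container environment). A 
minimal reproduction of just that parsing step — this mimics, but is not, the 
Log4j builder code:

{code:java}
// Sketch: when the ${env:...} lookup is not resolved, the raw placeholder
// string is what DefaultRolloverStrategy's "max" attribute ends up parsing.
public class RolloverMaxParse {
    public static void main(String[] args) {
        String raw = System.getenv("MAX_LOG_FILE_NUMBER");
        String max = (raw != null) ? raw : "${env:MAX_LOG_FILE_NUMBER}"; // unresolved placeholder
        int maxFiles = Integer.parseInt(max); // NumberFormatException when unresolved
        System.out.println("max log files: " + maxFiles);
    }
}
{code}

Log4j 2 lookups also accept an inline default (e.g. 
{{${env:MAX_LOG_FILE_NUMBER:-10}}}), which would avoid the 
unresolved-placeholder case; whether that or exporting the variable is the 
right fix likely depends on the FLINK-8357 changes mentioned earlier in this 
thread.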



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19865) YARN tests failed with "java.lang.NumberFormatException: For input string: "${env:MAX_LOG_FILE_NUMBER}" java.lang.NumberFormatException: For input string: "${env:MAX_L

2020-10-30 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17223717#comment-17223717
 ] 

Till Rohrmann commented on FLINK-19865:
---

Could you take a look at the problem [~rmetzger]?

> YARN tests failed with "java.lang.NumberFormatException: For input string: 
> "${env:MAX_LOG_FILE_NUMBER}" java.lang.NumberFormatException: For input 
> string: "${env:MAX_LOG_FILE_NUMBER}""
> 
>
> Key: FLINK-19865
> URL: https://issues.apache.org/jira/browse/FLINK-19865
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8541=logs=245e1f2e-ba5b-5570-d689-25ae21e5302f=e7f339b2-a7c3-57d9-00af-3712d4b15354
> {code}
> 2020-10-28T22:58:39.4927767Z 2020-10-28 22:57:33,866 main ERROR Could not 
> create plugin of type class 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy for 
> element DefaultRolloverStrategy: java.lang.NumberFormatException: For input 
> string: "${env:MAX_LOG_FILE_NUMBER}" java.lang.NumberFormatException: For 
> input string: "${env:MAX_LOG_FILE_NUMBER}"
> 2020-10-28T22:58:39.4929252Z  at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> 2020-10-28T22:58:39.4929823Z  at java.lang.Integer.parseInt(Integer.java:569)
> 2020-10-28T22:58:39.4930327Z  at java.lang.Integer.parseInt(Integer.java:615)
> 2020-10-28T22:58:39.4931047Z  at 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy$Builder.build(DefaultRolloverStrategy.java:137)
> 2020-10-28T22:58:39.4931866Z  at 
> org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy$Builder.build(DefaultRolloverStrategy.java:90)
> 2020-10-28T22:58:39.4932720Z  at 
> org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:122)
> 2020-10-28T22:58:39.4933446Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:1002)
> 2020-10-28T22:58:39.4934275Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:942)
> 2020-10-28T22:58:39.4935029Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:934)
> 2020-10-28T22:58:39.4935837Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:934)
> 2020-10-28T22:58:39.4936605Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:552)
> 2020-10-28T22:58:39.4937573Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:241)
> 2020-10-28T22:58:39.4938429Z  at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:288)
> 2020-10-28T22:58:39.4939206Z  at 
> org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:579)
> 2020-10-28T22:58:39.4939885Z  at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:651)
> 2020-10-28T22:58:39.4940490Z  at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:668)
> 2020-10-28T22:58:39.4941087Z  at 
> org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:253)
> 2020-10-28T22:58:39.4941733Z  at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
> 2020-10-28T22:58:39.4942534Z  at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
> 2020-10-28T22:58:39.4943154Z  at 
> org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
> 2020-10-28T22:58:39.4943820Z  at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:138)
> 2020-10-28T22:58:39.4944540Z  at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:45)
> 2020-10-28T22:58:39.4945199Z  at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:48)
> 2020-10-28T22:58:39.4945858Z  at 
> org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:30)
> 2020-10-28T22:58:39.4946426Z  at 
> org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:329)
> 2020-10-28T22:58:39.4946965Z  at 
> org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:349)
> 2020-10-28T22:58:39.4947698Z  at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.<clinit>(ClusterEntrypoint.java:108)
> {code}

[jira] [Commented] (FLINK-19864) TwoInputStreamTaskTest.testWatermarkMetrics failed with "expected:<1> but was:<-9223372036854775808>"

2020-10-30 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17223716#comment-17223716
 ] 

Till Rohrmann commented on FLINK-19864:
---

cc [~chesnay]

> TwoInputStreamTaskTest.testWatermarkMetrics failed with "expected:<1> but 
> was:<-9223372036854775808>"
> -
>
> Key: FLINK-19864
> URL: https://issues.apache.org/jira/browse/FLINK-19864
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8541=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=7c61167f-30b3-5893-cc38-a9e3d057e392
> {code}
> 2020-10-28T22:40:44.2528420Z [ERROR] 
> testWatermarkMetrics(org.apache.flink.streaming.runtime.tasks.TwoInputStreamTaskTest)
>  Time elapsed: 1.528 s <<< FAILURE! 2020-10-28T22:40:44.2529225Z 
> java.lang.AssertionError: expected:<1> but was:<-9223372036854775808> 
> 2020-10-28T22:40:44.2541228Z at org.junit.Assert.fail(Assert.java:88) 
> 2020-10-28T22:40:44.2542157Z at 
> org.junit.Assert.failNotEquals(Assert.java:834) 2020-10-28T22:40:44.2542954Z 
> at org.junit.Assert.assertEquals(Assert.java:645) 
> 2020-10-28T22:40:44.2543456Z at 
> org.junit.Assert.assertEquals(Assert.java:631) 2020-10-28T22:40:44.2544002Z 
> at 
> org.apache.flink.streaming.runtime.tasks.TwoInputStreamTaskTest.testWatermarkMetrics(TwoInputStreamTaskTest.java:540)
> {code}
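For context, -9223372036854775808 is Long.MIN_VALUE, which is commonly used as 
the "no watermark received yet" sentinel, so the metric appears to have been 
read before the first watermark propagated. A trivial check (sketch, not from 
the test):

{code:java}
public class WatermarkSentinelCheck {
    public static void main(String[] args) {
        long noWatermarkYet = Long.MIN_VALUE; // sentinel for "no watermark yet"
        System.out.println(noWatermarkYet);   // prints -9223372036854775808, the value in the failure
    }
}
{code}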



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19864) TwoInputStreamTaskTest.testWatermarkMetrics failed with "expected:<1> but was:<-9223372036854775808>"

2020-10-30 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-19864:
--
Fix Version/s: 1.12.0

> TwoInputStreamTaskTest.testWatermarkMetrics failed with "expected:<1> but 
> was:<-9223372036854775808>"
> -
>
> Key: FLINK-19864
> URL: https://issues.apache.org/jira/browse/FLINK-19864
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8541=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=7c61167f-30b3-5893-cc38-a9e3d057e392
> {code}
> 2020-10-28T22:40:44.2528420Z [ERROR] 
> testWatermarkMetrics(org.apache.flink.streaming.runtime.tasks.TwoInputStreamTaskTest)
>  Time elapsed: 1.528 s <<< FAILURE! 2020-10-28T22:40:44.2529225Z 
> java.lang.AssertionError: expected:<1> but was:<-9223372036854775808> 
> 2020-10-28T22:40:44.2541228Z at org.junit.Assert.fail(Assert.java:88) 
> 2020-10-28T22:40:44.2542157Z at 
> org.junit.Assert.failNotEquals(Assert.java:834) 2020-10-28T22:40:44.2542954Z 
> at org.junit.Assert.assertEquals(Assert.java:645) 
> 2020-10-28T22:40:44.2543456Z at 
> org.junit.Assert.assertEquals(Assert.java:631) 2020-10-28T22:40:44.2544002Z 
> at 
> org.apache.flink.streaming.runtime.tasks.TwoInputStreamTaskTest.testWatermarkMetrics(TwoInputStreamTaskTest.java:540)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19864) TwoInputStreamTaskTest.testWatermarkMetrics failed with "expected:<1> but was:<-9223372036854775808>"

2020-10-30 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-19864:
--
Priority: Critical  (was: Major)

> TwoInputStreamTaskTest.testWatermarkMetrics failed with "expected:<1> but 
> was:<-9223372036854775808>"
> -
>
> Key: FLINK-19864
> URL: https://issues.apache.org/jira/browse/FLINK-19864
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8541=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=7c61167f-30b3-5893-cc38-a9e3d057e392
> {code}
> 2020-10-28T22:40:44.2528420Z [ERROR] 
> testWatermarkMetrics(org.apache.flink.streaming.runtime.tasks.TwoInputStreamTaskTest)
>  Time elapsed: 1.528 s <<< FAILURE! 2020-10-28T22:40:44.2529225Z 
> java.lang.AssertionError: expected:<1> but was:<-9223372036854775808> 
> 2020-10-28T22:40:44.2541228Z at org.junit.Assert.fail(Assert.java:88) 
> 2020-10-28T22:40:44.2542157Z at 
> org.junit.Assert.failNotEquals(Assert.java:834) 2020-10-28T22:40:44.2542954Z 
> at org.junit.Assert.assertEquals(Assert.java:645) 
> 2020-10-28T22:40:44.2543456Z at 
> org.junit.Assert.assertEquals(Assert.java:631) 2020-10-28T22:40:44.2544002Z 
> at 
> org.apache.flink.streaming.runtime.tasks.TwoInputStreamTaskTest.testWatermarkMetrics(TwoInputStreamTaskTest.java:540)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19369) BlobClientTest.testGetFailsDuringStreamingForJobPermanentBlob hangs

2020-10-30 Thread Matthias (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17223710#comment-17223710
 ] 

Matthias commented on FLINK-19369:
--

Ok, the stack traces I mentioned in the comments seem to be unrelated and are 
probably due to heavy load on the Blob storage.

But I came across 
[JDK-8241239|https://bugs.openjdk.java.net/browse/JDK-8241239], which might 
explain this failure. I'm going to close the issue for now, considering that it 
only appeared once, more than a month ago. We might have to reopen the issue in 
case we experience failures like that again.

> BlobClientTest.testGetFailsDuringStreamingForJobPermanentBlob hangs
> ---
>
> Key: FLINK-19369
> URL: https://issues.apache.org/jira/browse/FLINK-19369
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination, Tests
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=6803=logs=f0ac5c25-1168-55a5-07ff-0e88223afed9=39a61cac-5c62-532f-d2c1-dea450a66708
> {code}
> 2020-09-22T21:40:57.5304615Z "main" #1 prio=5 os_prio=0 cpu=18407.84ms 
> elapsed=1969.42s tid=0x7f0730015800 nid=0x79bd waiting for monitor entry  
> [0x7f07389fb000]
> 2020-09-22T21:40:57.5305080Zjava.lang.Thread.State: BLOCKED (on object 
> monitor)
> 2020-09-22T21:40:57.5305487Z  at 
> sun.security.ssl.SSLSocketImpl.duplexCloseOutput(java.base@11.0.7/SSLSocketImpl.java:541)
> 2020-09-22T21:40:57.5306159Z  - waiting to lock <0x8661a560> (a 
> sun.security.ssl.SSLSocketOutputRecord)
> 2020-09-22T21:40:57.5306545Z  at 
> sun.security.ssl.SSLSocketImpl.close(java.base@11.0.7/SSLSocketImpl.java:472)
> 2020-09-22T21:40:57.5307045Z  at 
> org.apache.flink.runtime.blob.BlobUtils.closeSilently(BlobUtils.java:367)
> 2020-09-22T21:40:57.5307605Z  at 
> org.apache.flink.runtime.blob.BlobServerConnection.close(BlobServerConnection.java:141)
> 2020-09-22T21:40:57.5308337Z  at 
> org.apache.flink.runtime.blob.BlobClientTest.testGetFailsDuringStreaming(BlobClientTest.java:443)
> 2020-09-22T21:40:57.5308904Z  at 
> org.apache.flink.runtime.blob.BlobClientTest.testGetFailsDuringStreamingForJobPermanentBlob(BlobClientTest.java:408)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-19369) BlobClientTest.testGetFailsDuringStreamingForJobPermanentBlob hangs

2020-10-30 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias closed FLINK-19369.

Resolution: Won't Fix

> BlobClientTest.testGetFailsDuringStreamingForJobPermanentBlob hangs
> ---
>
> Key: FLINK-19369
> URL: https://issues.apache.org/jira/browse/FLINK-19369
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination, Tests
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=6803=logs=f0ac5c25-1168-55a5-07ff-0e88223afed9=39a61cac-5c62-532f-d2c1-dea450a66708
> {code}
> 2020-09-22T21:40:57.5304615Z "main" #1 prio=5 os_prio=0 cpu=18407.84ms 
> elapsed=1969.42s tid=0x7f0730015800 nid=0x79bd waiting for monitor entry  
> [0x7f07389fb000]
> 2020-09-22T21:40:57.5305080Zjava.lang.Thread.State: BLOCKED (on object 
> monitor)
> 2020-09-22T21:40:57.5305487Z  at 
> sun.security.ssl.SSLSocketImpl.duplexCloseOutput(java.base@11.0.7/SSLSocketImpl.java:541)
> 2020-09-22T21:40:57.5306159Z  - waiting to lock <0x8661a560> (a 
> sun.security.ssl.SSLSocketOutputRecord)
> 2020-09-22T21:40:57.5306545Z  at 
> sun.security.ssl.SSLSocketImpl.close(java.base@11.0.7/SSLSocketImpl.java:472)
> 2020-09-22T21:40:57.5307045Z  at 
> org.apache.flink.runtime.blob.BlobUtils.closeSilently(BlobUtils.java:367)
> 2020-09-22T21:40:57.5307605Z  at 
> org.apache.flink.runtime.blob.BlobServerConnection.close(BlobServerConnection.java:141)
> 2020-09-22T21:40:57.5308337Z  at 
> org.apache.flink.runtime.blob.BlobClientTest.testGetFailsDuringStreaming(BlobClientTest.java:443)
> 2020-09-22T21:40:57.5308904Z  at 
> org.apache.flink.runtime.blob.BlobClientTest.testGetFailsDuringStreamingForJobPermanentBlob(BlobClientTest.java:408)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19897) Improve UI related to FLIP-102

2020-10-30 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias updated FLINK-19897:
-
Description: 
This ticket collects issues that came up after merging FLIP-102 related changes 
into master. The following issues should be fixed.
* Add a tooltip to the Heap metrics cell pointing out that the max metric might 
differ from the configured maximum value. This tooltip could be made optional 
and only appear if the heap max differs from the configured value. Here's a 
proposal for the tooltip text: {{The maximum heap displayed might differ from 
the configured values depending on the GC algorithm used for this process.}}
* Rename "Network Memory Segments" to "Netty Shuffle Buffers"
* Rename "Network Garbage Collection" to "Garbage Collection"

  was:
This ticket collects issues that came up after merging FLIP-102 related changes 
into master. The following issues should be fixed.
* Add a tooltip to the Heap metrics row pointing out that the max metric might 
differ from the configured maximum value. Here's a proposal for the tooltip 
text: {{The maximum heap displayed might differ from the configured values 
depending on the GC algorithm used for this process.}}
* Rename "Network Memory Segments" to "Netty Shuffle Buffers"
* Rename "Network Garbage Collection" to "Garbage Collection"


> Improve UI related to FLIP-102
> --
>
> Key: FLINK-19897
> URL: https://issues.apache.org/jira/browse/FLINK-19897
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Reporter: Matthias
>Priority: Major
> Fix For: 1.12.0
>
>
> This ticket collects issues that came up after merging FLIP-102 related 
> changes into master. The following issues should be fixed.
> * Add a tooltip to the Heap metrics cell pointing out that the max metric might 
> differ from the configured maximum value. This tooltip could be made optional 
> and only appear if the heap max differs from the configured value. Here's a 
> proposal for the tooltip text: {{The maximum heap displayed might differ from 
> the configured values depending on the GC algorithm used for this process.}}
> * Rename "Network Memory Segments" to "Netty Shuffle Buffers"
> * Rename "Network Garbage Collection" to "Garbage Collection"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19661) Implement changes required to resolve FLIP-104

2020-10-30 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias updated FLINK-19661:
-
Summary: Implement changes required to resolve FLIP-104  (was: MlH!)

> Implement changes required to resolve FLIP-104
> --
>
> Key: FLINK-19661
> URL: https://issues.apache.org/jira/browse/FLINK-19661
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Metrics, Runtime / Web Frontend
>Reporter: Matthias
>Assignee: Matthias
>Priority: Major
>
> This is an umbrella ticket collecting all tasks related to 
> [FLIP-104|https://cwiki.apache.org/confluence/display/FLINK/FLIP-104%3A+Add+More+Metrics+to+Jobmanager].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19661) MlH!

2020-10-30 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias updated FLINK-19661:
-
Summary: MlH!  (was: Implement changes required to resolve FLIP-104)

> MlH!
> 
>
> Key: FLINK-19661
> URL: https://issues.apache.org/jira/browse/FLINK-19661
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Metrics, Runtime / Web Frontend
>Reporter: Matthias
>Assignee: Matthias
>Priority: Major
>
> This is an umbrella ticket collecting all tasks related to 
> [FLIP-104|https://cwiki.apache.org/confluence/display/FLINK/FLIP-104%3A+Add+More+Metrics+to+Jobmanager].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-19904) Update documentation to address difference between -Xmx and the metric for maximum heap

2020-10-30 Thread Matthias (Jira)
Matthias created FLINK-19904:


 Summary: Update documentation to address difference between -Xmx 
and the metric for maximum heap
 Key: FLINK-19904
 URL: https://issues.apache.org/jira/browse/FLINK-19904
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Runtime / Metrics
Affects Versions: 1.11.2, 1.10.2
Reporter: Matthias
 Fix For: 1.12.0


We observed a difference between the configured maximum heap and the maximum 
heap returned by the metric system. This is caused by the garbage collector in 
use (see [this 
blogpost|https://plumbr.io/blog/memory-leaks/less-memory-than-xmx] for further 
details).

We should make the user aware of this in the documentation by mentioning it on 
the memory configuration and metrics pages.
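A small sketch of how the two numbers can be compared on a given JVM (assuming 
the heap metric is backed by the JMX MemoryMXBean):

{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

// Compares the JMX-reported heap max (what a metric system typically exposes)
// with Runtime.maxMemory(); the JMX value can be smaller than -Xmx because
// some GC algorithms exclude reserved space such as a survivor region.
public class MaxHeapVsXmx {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        long jmxMax = memory.getHeapMemoryUsage().getMax();
        long runtimeMax = Runtime.getRuntime().maxMemory();
        System.out.printf("JMX heap max: %d bytes, Runtime.maxMemory(): %d bytes%n",
                jmxMax, runtimeMax);
    }
}
{code}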



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19897) Improve UI related to FLIP-102

2020-10-30 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias updated FLINK-19897:
-
Description: 
This ticket collects issues that came up after merging FLIP-102 related changes 
into master. The following issues should be fixed.
* Add a tooltip to the Heap metrics row pointing out that the max metric might 
differ from the configured maximum value. Here's a proposal for the tooltip 
text: {{The maximum heap displayed might differ from the configured values 
depending on the GC algorithm used for this process.}}
* Rename "Network Memory Segments" to "Netty Shuffle Buffers"
* Rename "Network Garbage Collection" to "Garbage Collection"

  was:
This ticket collects issues that came up after merging FLIP-102 related changes 
into master. The following issues should be fixed.
* Add a tooltip to the Heap metrics row pointing out that the max metric might 
differ from the configured maximum value. Here's a proposal for the tooltip 
text: {{The maximum heap displayed might differ from the configured values 
depending on the GC algorithm used for this process.}}
* Rename "Network Memory Segments" to "Netty Shuffle Buffer"
* Rename "Network Garbage Collection" to "Garbage Collection"


> Improve UI related to FLIP-102
> --
>
> Key: FLINK-19897
> URL: https://issues.apache.org/jira/browse/FLINK-19897
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Reporter: Matthias
>Priority: Major
> Fix For: 1.12.0
>
>
> This ticket collects issues that came up after merging FLIP-102 related 
> changes into master. The following issues should be fixed.
> * Add a tooltip to the Heap metrics row pointing out that the max metric might 
> differ from the configured maximum value. Here's a proposal for the tooltip 
> text: {{The maximum heap displayed might differ from the configured values 
> depending on the GC algorithm used for this process.}}
> * Rename "Network Memory Segments" to "Netty Shuffle Buffers"
> * Rename "Network Garbage Collection" to "Garbage Collection"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19903) Allow to read metadata for filesystem connector

2020-10-30 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-19903:
-
Parent: FLINK-15869
Issue Type: Sub-task  (was: Improvement)

> Allow to read metadata for filesystem connector
> ---
>
> Key: FLINK-19903
> URL: https://issues.apache.org/jira/browse/FLINK-19903
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Ruben Laguna
>Priority: Major
>
> Use case: 
> I have a dataset where some information is embedded in the filenames
> (200k files) and I need to extract it as a new column.
> In Spark I could use 
> `.withColumn("id",f.split(f.reverse(f.split(f.input_file_name(),'/'))[0],'\.')[0])`,
> but I don't see how I can do the same with Flink.
>  
> Apparently there is 
> [FLIP-107|https://cwiki.apache.org/confluence/display/FLINK/FLIP-107%3A+Handling+of+metadata+in+SQL+connectors],
> which would allow SQL connectors and formats to expose metadata. 
>  
> So it would be great for the Filesystem SQL connector to expose the path. 
> Ideally, the path could be exposed via a function that reads the 
> metadata, so I could write something akin to `SELECT input_file_name(),* 
> FROM table1`.
>  
>  
> [1]: 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-107%3A+Handling+of+metadata+in+SQL+connectors]
> [2]: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Can-I-get-the-filename-as-a-column-td39096.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19903) Allow to read metadata in filesystem connector

2020-10-30 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-19903:
-
Summary: Allow to read metadata in filesystem connector  (was: Allow to 
read metadata for filesystem connector)

> Allow to read metadata in filesystem connector
> --
>
> Key: FLINK-19903
> URL: https://issues.apache.org/jira/browse/FLINK-19903
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Ruben Laguna
>Priority: Major
>
> Use case: 
> I have a dataset where some information is embedded in the filenames
> (200k files) and I need to extract it as a new column.
> In Spark I could use 
> `.withColumn("id",f.split(f.reverse(f.split(f.input_file_name(),'/'))[0],'\.')[0])`,
> but I don't see how I can do the same with Flink.
>  
> Apparently there is 
> [FLIP-107|https://cwiki.apache.org/confluence/display/FLINK/FLIP-107%3A+Handling+of+metadata+in+SQL+connectors],
> which would allow SQL connectors and formats to expose metadata. 
>  
> So it would be great for the Filesystem SQL connector to expose the path. 
> Ideally, the path could be exposed via a function that reads the 
> metadata, so I could write something akin to `SELECT input_file_name(),* 
> FROM table1`.
>  
>  
> [1]: 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-107%3A+Handling+of+metadata+in+SQL+connectors]
> [2]: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Can-I-get-the-filename-as-a-column-td39096.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19903) Allow to read metadata for filesystem connector

2020-10-30 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-19903:
-
Summary: Allow to read metadata for filesystem connector  (was: Expose 
filesystem metada )

> Allow to read metadata for filesystem connector
> ---
>
> Key: FLINK-19903
> URL: https://issues.apache.org/jira/browse/FLINK-19903
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Ruben Laguna
>Priority: Major
>
> Use case: 
> I have a dataset where some information is embedded in the filenames
> (200k files) and I need to extract it as a new column.
> In Spark I could use 
> `.withColumn("id",f.split(f.reverse(f.split(f.input_file_name(),'/'))[0],'\.')[0])`,
> but I don't see how I can do the same with Flink.
>  
> Apparently there is 
> [FLIP-107|https://cwiki.apache.org/confluence/display/FLINK/FLIP-107%3A+Handling+of+metadata+in+SQL+connectors],
> which would allow SQL connectors and formats to expose metadata. 
>  
> So it would be great for the Filesystem SQL connector to expose the path. 
> Ideally, the path could be exposed via a function that reads the 
> metadata, so I could write something akin to `SELECT input_file_name(),* 
> FROM table1`.
>  
>  
> [1]: 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-107%3A+Handling+of+metadata+in+SQL+connectors]
> [2]: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Can-I-get-the-filename-as-a-column-td39096.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19903) Expose filesystem metada

2020-10-30 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-19903:
-
Summary: Expose filesystem metada   (was: Implement equivalent of Spark's 
f.input_file_name())

> Expose filesystem metada 
> -
>
> Key: FLINK-19903
> URL: https://issues.apache.org/jira/browse/FLINK-19903
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Ruben Laguna
>Priority: Major
>
> Use case: 
> I have a dataset where some information is embedded in the filenames
> (200k files) and I need to extract it as a new column.
> In Spark I could use 
> `.withColumn("id",f.split(f.reverse(f.split(f.input_file_name(),'/'))[0],'\.')[0])`,
> but I don't see how I can do the same with Flink.
>  
> Apparently there is 
> [FLIP-107|https://cwiki.apache.org/confluence/display/FLINK/FLIP-107%3A+Handling+of+metadata+in+SQL+connectors],
> which would allow SQL connectors and formats to expose metadata. 
>  
> So it would be great for the Filesystem SQL connector to expose the path. 
> Ideally, the path could be exposed via a function that reads the 
> metadata, so I could write something akin to `SELECT input_file_name(),* 
> FROM table1`.
>  
>  
> [1]: 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-107%3A+Handling+of+metadata+in+SQL+connectors]
> [2]: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Can-I-get-the-filename-as-a-column-td39096.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19903) Implement equivalent of Spark's f.input_file_name()

2020-10-30 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-19903:
-
Component/s: (was: API / Core)
 Table SQL / API

> Implement equivalent of Spark's f.input_file_name()
> ---
>
> Key: FLINK-19903
> URL: https://issues.apache.org/jira/browse/FLINK-19903
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Ruben Laguna
>Priority: Major
>
> Use case: 
> I have a dataset where some information is embedded in the filenames
> (200k files) and I need to extract it as a new column.
> In Spark I could use 
> `.withColumn("id",f.split(f.reverse(f.split(f.input_file_name(),'/'))[0],'\.')[0])`,
> but I don't see how I can do the same with Flink.
>  
> Apparently there is 
> [FLIP-107|https://cwiki.apache.org/confluence/display/FLINK/FLIP-107%3A+Handling+of+metadata+in+SQL+connectors],
> which would allow SQL connectors and formats to expose metadata. 
>  
> So it would be great for the Filesystem SQL connector to expose the path. 
> Ideally, the path could be exposed via a function that reads the 
> metadata, so I could write something akin to `SELECT input_file_name(),* 
> FROM table1`.
>  
>  
> [1]: 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-107%3A+Handling+of+metadata+in+SQL+connectors]
> [2]: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Can-I-get-the-filename-as-a-column-td39096.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19901) Unable to exclude metrics variables for the last metrics reporter.

2020-10-30 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-19901:
-
Fix Version/s: 1.11.3
   1.10.3
   1.12.0

> Unable to exclude metrics variables for the last metrics reporter.
> --
>
> Key: FLINK-19901
> URL: https://issues.apache.org/jira/browse/FLINK-19901
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Affects Versions: 1.10.0
>Reporter: Truong Duc Kien
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.12.0, 1.10.3, 1.11.3
>
>
> We discovered a bug that leads to the setting {{scope.variables.excludes}} 
> being ignored for the very last metric reporter.
> Because {{reporterIndex}} is incremented before the length check, the index of 
> the very last metric reporter overflows back to 0.
> Interestingly, this bug does not trigger when there's only one metric 
> reporter, because slot 0 is actually overwritten with that reporter's 
> variables instead of being used to store all variables in that case.
> {code:java}
> public abstract class AbstractMetricGroup<A extends AbstractMetricGroup<?>> implements MetricGroup {
> ...
>     public Map<String, String> getAllVariables(int reporterIndex, Set<String> excludedVariables) {
>         // offset cache location to account for general cache at position 0
>         reporterIndex += 1;
>         if (reporterIndex < 0 || reporterIndex >= logicalScopeStrings.length) {
>             reporterIndex = 0;
>         }
>         // if no variables are excluded (which is the default!) we re-use the general variables map to save space
>         return internalGetAllVariables(excludedVariables.isEmpty() ? 0 : reporterIndex, excludedVariables);
>     }
> ...
> {code}
>  [Github link to the above 
> code|https://github.com/apache/flink/blob/3bf5786655c3bb914ce02ebb0e4a1863b205b829/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/groups/AbstractMetricGroup.java#L122]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19901) Unable to exclude metrics variables for the last metrics reporter.

2020-10-30 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-19901:
-
Affects Version/s: (was: 1.11.2)
   1.10.0

> Unable to exclude metrics variables for the last metrics reporter.
> --
>
> Key: FLINK-19901
> URL: https://issues.apache.org/jira/browse/FLINK-19901
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Affects Versions: 1.10.0
>Reporter: Truong Duc Kien
>Assignee: Chesnay Schepler
>Priority: Major
>
> We discovered a bug that leads to the setting {{scope.variables.excludes}} 
> being ignored for the very last metric reporter.
> Because {{reporterIndex}} is incremented before the length check, the index of 
> the very last metric reporter overflows back to 0.
> Interestingly, this bug does not trigger when there's only one metric 
> reporter, because slot 0 is actually overwritten with that reporter's 
> variables instead of being used to store all variables in that case.
> {code:java}
> public abstract class AbstractMetricGroup<A extends AbstractMetricGroup<?>> implements MetricGroup {
> ...
>     public Map<String, String> getAllVariables(int reporterIndex, Set<String> excludedVariables) {
>         // offset cache location to account for general cache at position 0
>         reporterIndex += 1;
>         if (reporterIndex < 0 || reporterIndex >= logicalScopeStrings.length) {
>             reporterIndex = 0;
>         }
>         // if no variables are excluded (which is the default!) we re-use the general variables map to save space
>         return internalGetAllVariables(excludedVariables.isEmpty() ? 0 : reporterIndex, excludedVariables);
>     }
> ...
> {code}
>  [Github link to the above 
> code|https://github.com/apache/flink/blob/3bf5786655c3bb914ce02ebb0e4a1863b205b829/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/groups/AbstractMetricGroup.java#L122]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-19903) Implement equivalent of Spark's f.input_file_name()

2020-10-30 Thread Ruben Laguna (Jira)
Ruben Laguna created FLINK-19903:


 Summary: Implement equivalent of Spark's f.input_file_name()
 Key: FLINK-19903
 URL: https://issues.apache.org/jira/browse/FLINK-19903
 Project: Flink
  Issue Type: Improvement
  Components: API / Core
Reporter: Ruben Laguna


Use case: 

I have a dataset where some information is embedded in the filenames
(200k files) and I need to extract it as a new column.

In Spark I could use 
`.withColumn("id",f.split(f.reverse(f.split(f.input_file_name(),'/'))[0],'\.')[0])`,
but I don't see how I can do the same with Flink.

Apparently there is 
[FLIP-107|https://cwiki.apache.org/confluence/display/FLINK/FLIP-107%3A+Handling+of+metadata+in+SQL+connectors],
which would allow SQL connectors and formats to expose metadata.

So it would be great for the Filesystem SQL connector to expose the path. 
Ideally, the path could be exposed via a function that reads the metadata, 
so I could write something akin to `SELECT input_file_name(),* FROM table1`.

 

 

[1]: 
[https://cwiki.apache.org/confluence/display/FLINK/FLIP-107%3A+Handling+of+metadata+in+SQL+connectors]

[2]: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Can-I-get-the-filename-as-a-column-td39096.html
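A sketch of the kind of DDL that FLIP-107-style metadata could enable for this 
use case (the 'file.path' metadata key and the column names are hypothetical, 
not an existing API):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class FilePathMetadataSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.newInstance().build());
        // Hypothetical: expose the source file path as a virtual metadata column.
        tEnv.executeSql(
            "CREATE TABLE table1 (\n"
          + "  data STRING,\n"
          + "  input_file STRING METADATA FROM 'file.path' VIRTUAL\n"
          + ") WITH (\n"
          + "  'connector' = 'filesystem',\n"
          + "  'path' = 'file:///data/files',\n"
          + "  'format' = 'raw'\n"
          + ")");
        tEnv.executeSql("SELECT input_file, data FROM table1").print();
    }
}
{code}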



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-19901) Unable to exclude metrics variables for the last metrics reporter.

2020-10-30 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-19901:


Assignee: Chesnay Schepler

> Unable to exclude metrics variables for the last metrics reporter.
> --
>
> Key: FLINK-19901
> URL: https://issues.apache.org/jira/browse/FLINK-19901
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Affects Versions: 1.11.2
>Reporter: Truong Duc Kien
>Assignee: Chesnay Schepler
>Priority: Major
>
> We discovered a bug that leads to the setting {{scope.variables.excludes}} 
> being ignored for the very last metric reporter.
> Because {{reporterIndex}} is incremented before the length check, the index of 
> the very last metric reporter overflows back to 0.
> Interestingly, this bug does not trigger when there's only one metric 
> reporter, because slot 0 is actually overwritten with that reporter's 
> variables instead of being used to store all variables in that case.
> {code:java}
> public abstract class AbstractMetricGroup<A extends AbstractMetricGroup<?>> implements MetricGroup {
> ...
>     public Map<String, String> getAllVariables(int reporterIndex, Set<String> excludedVariables) {
>         // offset cache location to account for general cache at position 0
>         reporterIndex += 1;
>         if (reporterIndex < 0 || reporterIndex >= logicalScopeStrings.length) {
>             reporterIndex = 0;
>         }
>         // if no variables are excluded (which is the default!) we re-use the general variables map to save space
>         return internalGetAllVariables(excludedVariables.isEmpty() ? 0 : reporterIndex, excludedVariables);
>     }
> ...
> {code}
>  [Github link to the above 
> code|https://github.com/apache/flink/blob/3bf5786655c3bb914ce02ebb0e4a1863b205b829/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/groups/AbstractMetricGroup.java#L122]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-18363) Add user classloader to context in DeSerializationSchema

2020-10-30 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-18363.

Resolution: Implemented

Implemented in da67eeb90dda7d6610c5328f061ab417a5c4edbc

> Add user classloader to context in DeSerializationSchema
> 
>
> Key: FLINK-18363
> URL: https://issues.apache.org/jira/browse/FLINK-18363
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core
>Reporter: Timo Walther
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Currently, it is only possible to retrieve the metric group in the open() 
> method of DeserializationSchema and SerializationSchema. We should also 
> expose the user code classloader.
> We should also create some utility methods like 
> `InitializationContext.from(RuntimeContext)` instead of implementing the full 
> interface for every call to `open()`.
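A sketch of the accessor this would add (interface and method names assumed for 
illustration; not the merged API):

{code:java}
import org.apache.flink.metrics.MetricGroup;

// Hypothetical shape of the open() context: a classloader accessor is exposed
// next to the existing metric group accessor.
public interface InitializationContext {
    MetricGroup getMetricGroup();
    ClassLoader getUserCodeClassLoader(); // newly exposed user code classloader
}
{code}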



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-19762) Selecting Job-ID and TaskManager-ID in web UI covers more than the ID

2020-10-30 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-19762.

Resolution: Fixed

master: ddc163a98a35f07f9afebddabd0ca65a7007255b

> Selecting Job-ID and TaskManager-ID in web UI covers more than the ID
> -
>
> Key: FLINK-19762
> URL: https://issues.apache.org/jira/browse/FLINK-19762
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.10.2, 1.11.2
>Reporter: Matthias
>Assignee: CloseRiver
>Priority: Minor
>  Labels: pull-request-available, starter, usability
> Fix For: 1.12.0
>
> Attachments: Screenshot 2020-10-22 at 09.47.41.png, 
> image-2020-10-29-20-32-39-368.png, image-2020-10-30-16-12-17-038.png
>
>
> More than just the ID is selected when trying to copy the Job ID from the web 
> UI by double-clicking it. See the attached screenshot:
>  !Screenshot 2020-10-22 at 09.47.41.png!
> The same thing happens for the TaskManager ID in the corresponding 
> TaskManager Overview page.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-19835) Don't emit intermediate watermarks from sources in BATCH execution mode

2020-10-30 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-19835.

Fix Version/s: 1.12.0
   Resolution: Implemented

master: 16e40b6239d41dd33db1b7bcc4fa1618fc4acb66

And commits around it.

> Don't emit intermediate watermarks from sources in BATCH execution mode
> ---
>
> Key: FLINK-19835
> URL: https://issues.apache.org/jira/browse/FLINK-19835
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Currently, both sources and watermark/timestamp operators can emit watermarks 
> that we don't really need. We only need a final watermark in BATCH execution 
> mode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-19902) Adjust JobMasterTest to be compatible

2020-10-30 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-19902:


 Summary: Adjust JobMasterTest to be compatible
 Key: FLINK-19902
 URL: https://issues.apache.org/jira/browse/FLINK-19902
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination, Tests
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.12.0


The {{JobMasterTest}} is not fully compatible with declarative resource 
management; for example, it listens to slot requests made to the 
ResourceManager, or accepts slots for a job that has no vertices (i.e., no 
requirements).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19901) Unable to exclude metrics variables for the last metrics reporter.

2020-10-30 Thread Truong Duc Kien (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17223565#comment-17223565
 ] 

Truong Duc Kien commented on FLINK-19901:
-

[~chesnay] Can you take a look? Thanks.

> Unable to exclude metrics variables for the last metrics reporter.
> --
>
> Key: FLINK-19901
> URL: https://issues.apache.org/jira/browse/FLINK-19901
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Affects Versions: 1.11.2
>Reporter: Truong Duc Kien
>Priority: Major
>
> We discovered a bug that leads to the setting {{scope.variables.excludes}} 
> being ignored for the very last metric reporter.
> Because {{reporterIndex}} is incremented before the length check, the index of 
> the very last metric reporter overflows back to 0.
> Interestingly, this bug does not trigger when there's only one metric 
> reporter, because slot 0 is actually overwritten with that reporter's 
> variables instead of being used to store all variables in that case.
> {code:java}
> public abstract class AbstractMetricGroup<A extends AbstractMetricGroup<?>> implements MetricGroup {
> ...
>     public Map<String, String> getAllVariables(int reporterIndex, Set<String> excludedVariables) {
>         // offset cache location to account for general cache at position 0
>         reporterIndex += 1;
>         if (reporterIndex < 0 || reporterIndex >= logicalScopeStrings.length) {
>             reporterIndex = 0;
>         }
>         // if no variables are excluded (which is the default!) we re-use the general variables map to save space
>         return internalGetAllVariables(excludedVariables.isEmpty() ? 0 : reporterIndex, excludedVariables);
>     }
> ...
> {code}
>  [Github link to the above 
> code|https://github.com/apache/flink/blob/3bf5786655c3bb914ce02ebb0e4a1863b205b829/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/groups/AbstractMetricGroup.java#L122]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-19901) Unable to exclude metrics variables for the last metrics reporter.

2020-10-30 Thread Truong Duc Kien (Jira)
Truong Duc Kien created FLINK-19901:
---

 Summary: Unable to exclude metrics variables for the last metrics 
reporter.
 Key: FLINK-19901
 URL: https://issues.apache.org/jira/browse/FLINK-19901
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Metrics
Affects Versions: 1.11.2
Reporter: Truong Duc Kien


We discovered a bug that leads to the setting {{scope.variables.excludes}} 
being ignored for the very last metric reporter.

Because {{reporterIndex}} is incremented before the length check, the index of 
the very last metric reporter overflows back to 0.

Interestingly, this bug does not trigger when there's only one metric reporter, 
because slot 0 is actually overwritten with that reporter's variables instead 
of being used to store all variables in that case.
{code:java}
public abstract class AbstractMetricGroup<A extends AbstractMetricGroup<?>> implements MetricGroup {
...
    public Map<String, String> getAllVariables(int reporterIndex, Set<String> excludedVariables) {
        // offset cache location to account for general cache at position 0
        reporterIndex += 1;
        if (reporterIndex < 0 || reporterIndex >= logicalScopeStrings.length) {
            reporterIndex = 0;
        }
        // if no variables are excluded (which is the default!) we re-use the general variables map to save space
        return internalGetAllVariables(excludedVariables.isEmpty() ? 0 : reporterIndex, excludedVariables);
    }
...
{code}
 [Github link to the above 
code|https://github.com/apache/flink/blob/3bf5786655c3bb914ce02ebb0e4a1863b205b829/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/groups/AbstractMetricGroup.java#L122]
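A self-contained sketch of the off-by-one (simplified from the method above; 
not the actual fix):

{code:java}
// Incrementing before the bounds check wraps the last valid reporter index to
// slot 0, which is reserved for the general variables cache.
public class ReporterIndexDemo {
    static int cacheSlot(int reporterIndex, int numSlots) {
        reporterIndex += 1; // offset for the general cache at slot 0
        if (reporterIndex < 0 || reporterIndex >= numSlots) {
            reporterIndex = 0; // the last reporter lands here unintentionally
        }
        return reporterIndex;
    }

    public static void main(String[] args) {
        int numSlots = 3; // assume the cache array is sized to the number of reporters
        for (int reporter = 0; reporter < 3; reporter++) {
            System.out.println("reporter " + reporter + " -> slot " + cacheSlot(reporter, numSlots));
        }
        // reporter 2 -> slot 0: its excluded-variables setting is silently dropped
    }
}
{code}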



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19900) Wrong property name for surefire log4j configuration

2020-10-30 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-19900:
-
Priority: Trivial  (was: Major)

> Wrong property name for surefire log4j configuration
> 
>
> Key: FLINK-19900
> URL: https://issues.apache.org/jira/browse/FLINK-19900
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Log4j uses the {{log4j.configurationFile}} system property for passing a 
> configuration file. In our surefire configuration we use the 
> {{log4j.configuration}} property instead, which has no effect.
> {code}
> 
> <plugin>
>     <groupId>org.apache.maven.plugins</groupId>
>     <artifactId>maven-surefire-plugin</artifactId>
>     <version>2.22.1</version>
>     <configuration>
>         <systemPropertyVariables>
>             <log4j.configuration>${log4j.configuration}</log4j.configuration>
>         </systemPropertyVariables>
>     </configuration>
> </plugin>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19900) Wrong property name for surefire log4j configuration

2020-10-30 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17223563#comment-17223563
 ] 

Chesnay Schepler commented on FLINK-19900:
--

We can just remove it; the log4j file is picked up automatically in both Maven 
and the IDE, and on CI we explicitly use the correct log4j property.

> Wrong property name for surefire log4j configuration
> 
>
> Key: FLINK-19900
> URL: https://issues.apache.org/jira/browse/FLINK-19900
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Log4j uses the {{log4j.configurationFile}} system property for passing a 
> configuration file. In our surefire configuration we use the 
> {{log4j.configuration}} property instead, which has no effect.
> {code}
> 
> <plugin>
>     <groupId>org.apache.maven.plugins</groupId>
>     <artifactId>maven-surefire-plugin</artifactId>
>     <version>2.22.1</version>
>     <configuration>
>         <systemPropertyVariables>
>             <log4j.configuration>${log4j.configuration}</log4j.configuration>
>         </systemPropertyVariables>
>     </configuration>
> </plugin>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13834: [FLINK-19872][Formats/Csv] Support to parse millisecond for TIME type in CSV format

2020-10-30 Thread GitBox


flinkbot edited a comment on pull request #13834:
URL: https://github.com/apache/flink/pull/13834#issuecomment-718406146


   
   ## CI report:
   
   * 3792d96b470ee584cd60265925bf578d65fffa64 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8582)
 
   * dfa6b28695625848a5798b38f7aa38b67029797a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13833: [FLINK-19867][table-common] Validation fails for UDF that accepts var…

2020-10-30 Thread GitBox


flinkbot edited a comment on pull request #13833:
URL: https://github.com/apache/flink/pull/13833#issuecomment-718355139


   
   ## CI report:
   
   * e36269afc70e39d6b97a39b99676ab2301081b66 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8635)
 
   * 3436faf8dfda2a925c0b66b4e81bc571c21cfffd Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8670)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-19900) Wrong property name for surefire log4j configuration

2020-10-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-19900:
---
Labels: pull-request-available  (was: )

> Wrong property name for surefire log4j configuration
> 
>
> Key: FLINK-19900
> URL: https://issues.apache.org/jira/browse/FLINK-19900
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Log4j uses the {{log4j.configurationFile}} system property for passing a 
> configuration file. In our surefire configuration we use the 
> {{log4j.configuration}} property instead, which has no effect.
> {code}
> 
> <plugin>
>     <groupId>org.apache.maven.plugins</groupId>
>     <artifactId>maven-surefire-plugin</artifactId>
>     <version>2.22.1</version>
>     <configuration>
>         <systemPropertyVariables>
>             <log4j.configuration>${log4j.configuration}</log4j.configuration>
>         </systemPropertyVariables>
>     </configuration>
> </plugin>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dawidwys opened a new pull request #13858: [FLINK-19900] Fix surefire log4j configuration

2020-10-30 Thread GitBox


dawidwys opened a new pull request #13858:
URL: https://github.com/apache/flink/pull/13858


   ## What is the purpose of the change
   
   Log4j uses the `log4j.configurationFile` system property for passing a 
configuration file. In our surefire configuration we use the `log4j.configuration` 
property instead, which has no effect.
   
   
   ## Verifying this change
   
   Pass a different configuration file with logging enabled via the command line 
and verify that tests respect the passed configuration.
   You can pass an alternative configuration like this:
   
   ```
   mvn '-Dlog4j.configuration=[path]/log4j2-on.properties' clean install
   ```
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13828: [FLINK-19835] Don't emit intermediate watermarks from sources in BATCH execution mode

2020-10-30 Thread GitBox


flinkbot edited a comment on pull request #13828:
URL: https://github.com/apache/flink/pull/13828#issuecomment-718083567


   
   ## CI report:
   
   * cbef03730f1eef907bcc6aaacda3525410cb6080 UNKNOWN
   * b2332fd1a5e22460e043267a3b7d296e59c804b6 UNKNOWN
   * 098a3e0039824c161d1fe815a65e6a982f166681 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8581)
 
   * aa503b5eaed5991fa4bfcc5fa32fbb538cf6f6ea UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-19900) Wrong property name for surefire log4j configuration

2020-10-30 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-19900:
-
Description: 
Log4j uses the {{log4j.configurationFile}} system property for passing a 
configuration file. In our surefire configuration we use the 
{{log4j.configuration}} property instead, which has no effect.

{code}

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.22.1</version>
    <configuration>
        <systemPropertyVariables>
            <log4j.configuration>${log4j.configuration}</log4j.configuration>
        </systemPropertyVariables>
    </configuration>
</plugin>
{code}

  was:
Log4j uses the {{log4j.configurationFile}} system property for passing a 
configuration file. In our surefire configuration we use the 
{{log4j.configuration}} property instead, which has no effect.

{code}

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.22.1</version>
    <configuration>
        <forkCount>${flink.forkCount}</forkCount>
        <reuseForks>${flink.reuseForks}</reuseForks>
        <trimStackTrace>false</trimStackTrace>
        <systemPropertyVariables>
            <forkNumber>0${surefire.forkNumber}</forkNumber>
            <log4j.configuration>${log4j.configuration}</log4j.configuration>
            <hadoop.version>${hadoop.version}</hadoop.version>
            <checkpointing.randomization>true</checkpointing.randomization>
            <project.basedir>${project.basedir}</project.basedir>
        </systemPropertyVariables>
        <argLine>-Xms256m -Xmx2048m -Dmvn.forkNumber=${surefire.forkNumber} -XX:+UseG1GC</argLine>
    </configuration>
</plugin>
{code}


> Wrong property name for surefire log4j configuration
> 
>
> Key: FLINK-19900
> URL: https://issues.apache.org/jira/browse/FLINK-19900
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.12.0
>
>
> Log4j uses the {{log4j.configurationFile}} system property for passing a 
> configuration file. In our surefire configuration we use the 
> {{log4j.configuration}} property instead, which has no effect.
> {code}
> 
> <plugin>
>     <groupId>org.apache.maven.plugins</groupId>
>     <artifactId>maven-surefire-plugin</artifactId>
>     <version>2.22.1</version>
>     <configuration>
>         <systemPropertyVariables>
>             <log4j.configuration>${log4j.configuration}</log4j.configuration>
>         </systemPropertyVariables>
>     </configuration>
> </plugin>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13766: [FLINK-19703][runtime] Untrack a result partition if its producer task failed in TaskManager

2020-10-30 Thread GitBox


flinkbot edited a comment on pull request #13766:
URL: https://github.com/apache/flink/pull/13766#issuecomment-715255564


   
   ## CI report:
   
   * b46f97dd6b1fa67b4dd169867c1967877ac8534c Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8263)
 
   * 8fe7e75c390442fa9aab16ab05c3cc5e0fc6d553 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8669)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-19900) Wrong property name for surefire log4j configuration

2020-10-30 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-19900:


 Summary: Wrong property name for surefire log4j configuration
 Key: FLINK-19900
 URL: https://issues.apache.org/jira/browse/FLINK-19900
 Project: Flink
  Issue Type: Bug
  Components: Build System
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.12.0


Log4j uses the {{log4j.configurationFile}} system property for passing a 
configuration file. In our surefire configuration we use the 
{{log4j.configuration}} property instead, which has no effect.

{code}
<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-surefire-plugin</artifactId>
	<version>2.22.1</version>
	<configuration>
		<forkCount>${flink.forkCount}</forkCount>
		<reuseForks>${flink.reuseForks}</reuseForks>
		<trimStackTrace>false</trimStackTrace>
		<systemPropertyVariables>
			<forkNumber>0${surefire.forkNumber}</forkNumber>
			<log4j.configuration>${log4j.configuration}</log4j.configuration>
			<hadoop.version>${hadoop.version}</hadoop.version>
			<checkpointing.randomization>true</checkpointing.randomization>
			<project.basedir>${project.basedir}</project.basedir>
		</systemPropertyVariables>
		<argLine>-Xms256m -Xmx2048m -Dmvn.forkNumber=${surefire.forkNumber} -XX:+UseG1GC</argLine>
	</configuration>
</plugin>
{code}
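
For context, here is a minimal Java sketch (hypothetical, not part of this ticket) of why the property name matters: Log4j 2 resolves its configuration via {{log4j.configurationFile}}, while the Log4j 1.x key {{log4j.configuration}} is silently ignored, so a surefire run that only sets the latter falls back to the default configuration. The class name and file path below are assumptions for illustration.

{code:java}
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Hypothetical demo class; the configuration path is an assumption for illustration.
public class Log4jPropertyDemo {

    public static void main(String[] args) {
        // Log4j 1.x property name: Log4j 2 ignores it, so this has no effect.
        System.setProperty("log4j.configuration", "/tmp/log4j2-test.properties");

        // Log4j 2 property name: must be set before the first LogManager call
        // initializes the logging context.
        System.setProperty("log4j.configurationFile", "/tmp/log4j2-test.properties");

        Logger logger = LogManager.getLogger(Log4jPropertyDemo.class);
        logger.info("Configured via {}", System.getProperty("log4j.configurationFile"));
    }
}
{code}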



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13648: [FLINK-19632] Introduce a new ResultPartitionType for Approximate Local Recovery

2020-10-30 Thread GitBox


flinkbot edited a comment on pull request #13648:
URL: https://github.com/apache/flink/pull/13648#issuecomment-709153806


   
   ## CI report:
   
   * ccacf6664b0f6ec3630a939f8dd155f53226e4b7 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8365)
 
   * c306ae3258d3e27f22abafe3db27eeb413467dac Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8668)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13523: [FLINK-15981][network] Implement FileRegion way to shuffle file-based blocking partition in network stack

2020-10-30 Thread GitBox


flinkbot edited a comment on pull request #13523:
URL: https://github.com/apache/flink/pull/13523#issuecomment-701325670


   
   ## CI report:
   
   * 81b40b201a250bc3c738b30d3d820657b9f749ee UNKNOWN
   * c012a3b687c0ceab428913a5e6cea6f9d3948932 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8655)
 
   * 33a3c9cec53624573cc00a6e5e42f0d921b1e231 UNKNOWN
   * 31de3dfae7e14c5c262d8326611a1dd29ea9b834 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8667)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-19824) Refactor and merge SupportsComputedColumnPushDown and SupportsWatermarkPushDown interfaces

2020-10-30 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-19824.
---
Fix Version/s: 1.12.0
   Resolution: Fixed

Fixed in master: 3bf5786655c3bb914ce02ebb0e4a1863b205b829

> Refactor and merge SupportsComputedColumnPushDown and 
> SupportsWatermarkPushDown interfaces
> --
>
> Key: FLINK-19824
> URL: https://issues.apache.org/jira/browse/FLINK-19824
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> As discussed in the mailing list [1], the existing 
> SupportsComputedColumnPushDown and SupportsWatermarkPushDown interfaces are 
> confusing and hard to implement for connectors. The 
> {{SupportsComputedColumnPushDown}} interface is only used for watermark 
> push-down. Therefore, combining them into a single interface, 
> {{SupportsWatermarkPushDown}}, would be better and would also work. The 
> proposed interface looks like this:
> {code:java}
> public interface SupportsWatermarkPushDown {
> 
>     void applyWatermark(
>             org.apache.flink.table.sources.wmstrategies.WatermarkStrategy watermarkStrategy);
> 
> }
> {code}
> [1]: 
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Merge-SupportsComputedColumnPushDown-and-SupportsWatermarkPushDown-td44387.html
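
To illustrate how a connector might consume the merged interface, here is a self-contained Java sketch; the {{MySource}} class and the simplified {{WatermarkStrategy}} stand-in are assumptions for illustration, not the ticket's actual planner types.

{code:java}
// Simplified stand-in for the planner-side watermark strategy, for illustration only.
interface WatermarkStrategy {
	String describe();
}

interface SupportsWatermarkPushDown {
	void applyWatermark(WatermarkStrategy watermarkStrategy);
}

// A hypothetical table source that accepts the pushed-down watermark strategy
// and would later apply it when creating its runtime source.
class MySource implements SupportsWatermarkPushDown {

	private WatermarkStrategy watermarkStrategy;

	@Override
	public void applyWatermark(WatermarkStrategy watermarkStrategy) {
		// The planner pushes the strategy into the source; the source stores it
		// and uses it when emitting records and watermarks.
		this.watermarkStrategy = watermarkStrategy;
	}

	public static void main(String[] args) {
		MySource source = new MySource();
		source.applyWatermark(() -> "rowtime - INTERVAL '5' SECOND");
		System.out.println("Applied strategy: " + source.watermarkStrategy.describe());
	}
}
{code}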



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-19276) Allow to read metadata for Debezium format

2020-10-30 Thread Timo Walther (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther reassigned FLINK-19276:


Assignee: Timo Walther

> Allow to read metadata for Debezium format
> --
>
> Key: FLINK-19276
> URL: https://issues.apache.org/jira/browse/FLINK-19276
> Project: Flink
>  Issue Type: Sub-task
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Reporter: Timo Walther
>Assignee: Timo Walther
>Priority: Major
>
> Expose the metadata mentioned in FLIP-107 for Debezium format.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong merged pull request #13806: [FLINK-19824][table-api] Refactor and merge SupportsComputedColumnPushDown and SupportsWatermarkPushDown interfaces

2020-10-30 Thread GitBox


wuchong merged pull request #13806:
URL: https://github.com/apache/flink/pull/13806


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on pull request #13806: [FLINK-19824][table-api] Refactor and merge SupportsComputedColumnPushDown and SupportsWatermarkPushDown interfaces

2020-10-30 Thread GitBox


wuchong commented on pull request #13806:
URL: https://github.com/apache/flink/pull/13806#issuecomment-719471595


   Thanks for the review, @godfreyhe.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-19276) Allow to read metadata for Debezium format

2020-10-30 Thread Timo Walther (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17223558#comment-17223558
 ] 

Timo Walther commented on FLINK-19276:
--

Thanks for offering your help [~lsy]. I would like to assign this issue to 
myself in order to validate the new APIs and check whether all assumptions and 
interfaces play nicely together. I hope that is OK?

> Allow to read metadata for Debezium format
> --
>
> Key: FLINK-19276
> URL: https://issues.apache.org/jira/browse/FLINK-19276
> Project: Flink
>  Issue Type: Sub-task
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Reporter: Timo Walther
>Priority: Major
>
> Expose the metadata mentioned in FLIP-107 for Debezium format.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19896) Support first-n-rows deduplication in the Deduplicate operator

2020-10-30 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-19896:

Fix Version/s: (was: 1.11.2)

> Support first-n-rows deduplication in the Deduplicate operator
> --
>
> Key: FLINK-19896
> URL: https://issues.apache.org/jira/browse/FLINK-19896
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner, Table SQL / Runtime
>Affects Versions: 1.12.0, 1.11.3
>Reporter: Jun Zhang
>Priority: Major
>
> Currently the Deduplicate operator only supports first-row deduplication 
> (ordered by proc-time). In the scenario of first-n-rows deduplication, the 
> planner has to resort to the Rank operator. However, the Rank operator is less 
> efficient than Deduplicate due to larger state and more state access.
> This issue proposes to extend DeduplicateKeepFirstRowFunction to support 
> first-n-rows deduplication. The original first-row deduplication would then be 
> a special case of first-n-rows deduplication.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19896) Support first-n-rows deduplication in the Deduplicate operator

2020-10-30 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-19896:

Affects Version/s: (was: 1.11.3)
   (was: 1.12.0)

> Support first-n-rows deduplication in the Deduplicate operator
> --
>
> Key: FLINK-19896
> URL: https://issues.apache.org/jira/browse/FLINK-19896
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner, Table SQL / Runtime
>Reporter: Jun Zhang
>Priority: Major
>
> Currently the Deduplicate operator only supports first-row deduplication 
> (ordered by proc-time). In the scenario of first-n-rows deduplication, the 
> planner has to resort to the Rank operator. However, the Rank operator is less 
> efficient than Deduplicate due to larger state and more state access.
> This issue proposes to extend DeduplicateKeepFirstRowFunction to support 
> first-n-rows deduplication. The original first-row deduplication would then be 
> a special case of first-n-rows deduplication.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-19896) Support first-n-rows deduplication in the Deduplicate operator

2020-10-30 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17223557#comment-17223557
 ] 

Jark Wu edited comment on FLINK-19896 at 10/30/20, 10:25 AM:
-

I'm not sure about this. If we support first-n-rows for 
`DeduplicateKeepFirstRowFunction`, then it breaks the semantic of "Deduplicate".
Besides, if we want to support first-n-rows for 
`DeduplicateKeepFirstRowFunction`, the `DeduplicateKeepFirstRowFunction` will 
end up to be another top-n (Rank) operator implementation. 
I think we should focus on how to improve the performance of Rank operator in 
this case. 


was (Author: jark):
I'm not sure about this. If we support first-n-rows for 
`DeduplicateKeepFirstRowFunction`, then it breaks the semantic of "Deduplicate".
Besides, if we want to support first-n-rows for 
`DeduplicateKeepFirstRowFunction`, the `DeduplicateKeepFirstRowFunction` will 
end up to be another top-n (Rank) operator implementation. 
I think we should focus how to improve the performance of Rank operator in this 
case. 

> Support first-n-rows deduplication in the Deduplicate operator
> --
>
> Key: FLINK-19896
> URL: https://issues.apache.org/jira/browse/FLINK-19896
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner, Table SQL / Runtime
>Affects Versions: 1.12.0, 1.11.3
>Reporter: Jun Zhang
>Priority: Major
> Fix For: 1.11.2
>
>
> Currently the Deduplicate operator only supports first-row deduplication 
> (ordered by proc-time). In the scenario of first-n-rows deduplication, the 
> planner has to resort to the Rank operator. However, the Rank operator is less 
> efficient than Deduplicate due to larger state and more state access.
> This issue proposes to extend DeduplicateKeepFirstRowFunction to support 
> first-n-rows deduplication. The original first-row deduplication would then be 
> a special case of first-n-rows deduplication.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13331: [FLINK-19079][table-runtime] Import rowtime deduplicate operator

2020-10-30 Thread GitBox


flinkbot edited a comment on pull request #13331:
URL: https://github.com/apache/flink/pull/13331#issuecomment-687585649


   
   ## CI report:
   
   * f3b2192b6cc216c691e833e40e52dfc64807e0a0 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8642)
 
   * d4463d6be66ef3a125e23c4a63c4efad8359454d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-19899) [Kinesis][EFO] Optimise error handling to use a separate exception delivery mechanism

2020-10-30 Thread Danny Cranmer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17223555#comment-17223555
 ] 

Danny Cranmer commented on FLINK-19899:
---

[~tzulitai] as discussed, I will publish a PR soon.

> [Kinesis][EFO] Optimise error handling to use a separate exception delivery 
> mechanism
> -
>
> Key: FLINK-19899
> URL: https://issues.apache.org/jira/browse/FLINK-19899
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kinesis
>Reporter: Danny Cranmer
>Priority: Major
> Fix For: 1.12.0
>
>
> *Background*
> There is a queue used to pass events between the network client and consumer 
> application. When an error is thrown in the network thread, the queue is 
> cleared to make space for the error event. This means that records will be 
> thrown away to make space for errors (the records would be subsequently 
> reloaded from the shard).
> *Scope*
> Add a new mechanism to pass exceptions between threads, meaning data does not 
> need to be discarded. When an error is thrown, the error event will be 
> processed by the consumer once all of the records have been processed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19896) Support first-n-rows deduplication in the Deduplicate operator

2020-10-30 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17223557#comment-17223557
 ] 

Jark Wu commented on FLINK-19896:
-

I'm not sure about this. If we support first-n-rows for 
`DeduplicateKeepFirstRowFunction`, then it breaks the semantic of "Deduplicate".
Besides, if we want to support first-n-rows for 
`DeduplicateKeepFirstRowFunction`, the `DeduplicateKeepFirstRowFunction` will 
end up to be another top-n (Rank) operator implementation. 
I think we should focus on how to improve the performance of the Rank operator 
in this case. 
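
To make the state-size argument concrete, here is a hedged Java sketch of what a first-n-rows variant would roughly look like; the class and state names are assumptions for illustration, not Flink's actual `DeduplicateKeepFirstRowFunction`.

{code:java}
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Hypothetical sketch: keeps the first N rows per key in processing-time order.
// For N = 1 a boolean "seen" flag suffices; for N > 1 a per-key counter is
// needed, which is exactly the extra state a Rank-style operator maintains.
public class KeepFirstNRowsFunction extends KeyedProcessFunction<String, String, String> {

    private final int n;
    private transient ValueState<Integer> emittedCount;

    public KeepFirstNRowsFunction(int n) {
        this.n = n;
    }

    @Override
    public void open(Configuration parameters) {
        emittedCount = getRuntimeContext().getState(
            new ValueStateDescriptor<>("emitted-count", Integer.class));
    }

    @Override
    public void processElement(String row, Context ctx, Collector<String> out) throws Exception {
        Integer count = emittedCount.value();
        int current = count == null ? 0 : count;
        if (current < n) {
            // Still within the first N rows for this key: emit and count.
            emittedCount.update(current + 1);
            out.collect(row);
        }
        // Rows beyond the first N are dropped.
    }
}
{code}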

> Support first-n-rows deduplication in the Deduplicate operator
> --
>
> Key: FLINK-19896
> URL: https://issues.apache.org/jira/browse/FLINK-19896
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner, Table SQL / Runtime
>Affects Versions: 1.12.0, 1.11.3
>Reporter: Jun Zhang
>Priority: Major
> Fix For: 1.11.2
>
>
> Currently the Deduplicate operator only supports first-row deduplication 
> (ordered by proc-time). In the scenario of first-n-rows deduplication, the 
> planner has to resort to the Rank operator. However, the Rank operator is less 
> efficient than Deduplicate due to larger state and more state access.
> This issue proposes to extend DeduplicateKeepFirstRowFunction to support 
> first-n-rows deduplication. The original first-row deduplication would then be 
> a special case of first-n-rows deduplication.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19898) [Kinesis][EFO] Ignore ReadTimeoutException from SubscribeToShard retry policy

2020-10-30 Thread Danny Cranmer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17223556#comment-17223556
 ] 

Danny Cranmer commented on FLINK-19898:
---

[~tzulitai] as discussed, I will publish a PR soon.

> [Kinesis][EFO] Ignore ReadTimeoutException from SubscribeToShard retry policy
> 
>
> Key: FLINK-19898
> URL: https://issues.apache.org/jira/browse/FLINK-19898
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kinesis
>Reporter: Danny Cranmer
>Priority: Major
> Fix For: 1.12.0
>
>
> *Background* 
> The Flink Kinesis EFO consumer has a {{SubscribeToShard}} retry policy which 
> will terminate the job after a given number of consecutive attempt failures. 
> In high backpressure scenarios the Netty HTTP client throws a 
> {{ReadTimeoutException}} when the consumer takes longer than 30s to process a 
> batch. If this happens (by default) 10 times in a row, the job will 
> terminate. There is no need to terminate in this condition, and the restart 
> results in the job falling further behind.
> *Scope*
> Exclude the {{ReadTimeoutException}} from the {{SubscribeToShard}} retry 
> policy, such that the connector will gracefully reconnect once the consumer 
> has processed the queued records.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-19899) [Kinesis][EFO] Optimise error handling to use a separate exception delivery mechanism

2020-10-30 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-19899:
-

 Summary: [Kinesis][EFO] Optimise error handling to use a separate 
exception delivery mechanism
 Key: FLINK-19899
 URL: https://issues.apache.org/jira/browse/FLINK-19899
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Kinesis
Reporter: Danny Cranmer
 Fix For: 1.12.0


*Background*

There is a queue used to pass events between the network client and consumer 
application. When an error is thrown in the network thread, the queue is 
cleared to make space for the error event. This means that records will be 
thrown away to make space for errors (the records would be subsequently 
reloaded from the shard).

*Scope*

Add a new mechanism to pass exceptions between threads, meaning data does not 
need to be discarded. When an error is thrown, the error event will be 
processed by the consumer once all of the records have been processed.
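
A minimal Java sketch of such a side channel, under the assumption that records and errors travel separately; the class below is illustrative, not the connector's actual implementation.

{code:java}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: the error is parked in a separate slot instead of
// clearing the record queue, so buffered records survive the failure.
public class RecordsWithErrorQueue<T> {

    private final BlockingQueue<T> records = new LinkedBlockingQueue<>();
    private final AtomicReference<Throwable> error = new AtomicReference<>();

    /** Called from the network thread. */
    public void putRecord(T record) throws InterruptedException {
        records.put(record);
    }

    /** Called from the network thread; does not discard queued records. */
    public void reportError(Throwable t) {
        error.compareAndSet(null, t);
    }

    /** Called from the consumer thread: drain all records first, then surface the error. */
    public T poll() throws Exception {
        T record = records.poll(100, TimeUnit.MILLISECONDS);
        if (record != null) {
            return record;
        }
        Throwable t = error.get();
        if (t != null && records.isEmpty()) {
            throw new Exception("Shard consumer failed", t);
        }
        return null; // no data yet, caller retries
    }
}
{code}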



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-19756) Use multi-input optimization by default

2020-10-30 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he reassigned FLINK-19756:
--

Assignee: godfrey he

> Use multi-input optimization by default
> ---
>
> Key: FLINK-19756
> URL: https://issues.apache.org/jira/browse/FLINK-19756
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Caizhi Weng
>Assignee: godfrey he
>Priority: Major
> Fix For: 1.12.0
>
>
> After the multiple input operator is introduced we should use this 
> optimization by default. This will affect a large amount of plan tests so we 
> will do this in an independent subtask.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-19898) [Kinesis][EFO] Ignore ReadTimeoutException from SubscribeToShard retry policy

2020-10-30 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-19898:
-

 Summary: [Kinesis][EFO] Ignore ReadTimeoutException from 
SubscribeToShard retry policy
 Key: FLINK-19898
 URL: https://issues.apache.org/jira/browse/FLINK-19898
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Kinesis
Reporter: Danny Cranmer
 Fix For: 1.12.0


*Background* 

The Flink Kinesis EFO consumer has a {{SubscribeToShard}} retry policy which 
will terminate the job after a given number of consecutive attempt failures. In 
high backpressure scenarios the Netty HTTP client throws a 
{{ReadTimeoutException}} when the consumer takes longer than 30s to process a 
batch. If this happens (by default) 10 times in a row, the job will terminate. 
There is no need to terminate in this condition, and the restart results in the 
job falling further behind.

*Scope*

Exclude the {{ReadTimeoutException}} from the {{SubscribeToShard}} retry 
policy, such that the connector will gracefully reconnect once the consumer 
has processed the queued records.
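
A hedged sketch of how such an exclusion could look, assuming a simple retry-decision helper; the class and method names are illustrative, not the connector's actual API.

{code:java}
// Hypothetical retry-decision helper: a ReadTimeoutException under backpressure
// is treated as recoverable and does not count against the retry budget.
public class SubscribeToShardRetryPolicy {

    private final int maxConsecutiveFailures;
    private int consecutiveFailures;

    public SubscribeToShardRetryPolicy(int maxConsecutiveFailures) {
        this.maxConsecutiveFailures = maxConsecutiveFailures;
    }

    public boolean shouldRetry(Throwable error) {
        if (isReadTimeout(error)) {
            // Expected under backpressure: reconnect without consuming an attempt.
            return true;
        }
        consecutiveFailures++;
        return consecutiveFailures < maxConsecutiveFailures;
    }

    public void onSuccess() {
        consecutiveFailures = 0;
    }

    private static boolean isReadTimeout(Throwable error) {
        // Walk the cause chain; the Netty class name is matched loosely here to
        // keep the sketch free of a Netty dependency.
        for (Throwable t = error; t != null; t = t.getCause()) {
            if (t.getClass().getSimpleName().equals("ReadTimeoutException")) {
                return true;
            }
        }
        return false;
    }
}
{code}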



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on a change in pull request #13834: [FLINK-19872][Formats/Csv] Support to parse millisecond for TIME type in CSV format

2020-10-30 Thread GitBox


wuchong commented on a change in pull request #13834:
URL: https://github.com/apache/flink/pull/13834#discussion_r514996865



##
File path: 
flink-formats/flink-csv/src/test/java/org/apache/flink/formats/csv/CsvRowDataSerDeSchemaTest.java
##
@@ -109,6 +109,54 @@ public void testSerializeDeserialize() throws Exception {
new byte[] {107, 3, 11});
}
 
+   @Test
+   public void testSerializeDeserializeForTime() throws Exception {
+   testFieldForTime(
+   TIME(3),
+   "12:12:12.232",
+   "12:12:12.232",
+   (deserSchema) -> deserSchema.setNullLiteral("null"),
+   (serSchema) -> serSchema.setNullLiteral("null"),

Review comment:
   This works in my environment:
   
   ```java
testField(
TIME(3),
"12:12:12.232421",
LocalTime.parse("12:12:12.232"),
(deserSchema) -> deserSchema.setNullLiteral("null"),
",");
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhijiangW commented on pull request #13595: [FLINK-19582][network] Introduce sort-merge based blocking shuffle to Flink

2020-10-30 Thread GitBox


zhijiangW commented on pull request #13595:
URL: https://github.com/apache/flink/pull/13595#issuecomment-719467007


   Thanks for the updates @wsry ! 
   
   I think most of my previous comments were addressed except for 
https://github.com/apache/flink/pull/13595#discussion_r514988635. As 
@gaoyunhaii also mentioned above, it would be better to add some tests 
covering the empty subpartition case.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhijiangW commented on a change in pull request #13595: [FLINK-19582][network] Introduce sort-merge based blocking shuffle to Flink

2020-10-30 Thread GitBox


zhijiangW commented on a change in pull request #13595:
URL: https://github.com/apache/flink/pull/13595#discussion_r514994161



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/PartitionedFileWriter.java
##
@@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.partition;
+
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.util.IOUtils;
+
+import javax.annotation.concurrent.NotThreadSafe;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.channels.FileChannel;
+import java.nio.file.Path;
+import java.nio.file.StandardOpenOption;
+import java.util.Arrays;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * File writer which can write buffers and generate the {@link 
PartitionedFile}. Data is written region by region.
+ * Before writing any data, {@link #open} must be called and before writing a 
new region, {@link #startNewRegion}
+ * must be called. After writing all data, {@link #finish} must be called to 
close all opened files and return the
+ * target {@link PartitionedFile}.
+ */
+@NotThreadSafe
+public class PartitionedFileWriter {
+
+   /** Used when writing data buffers. */
+   private final ByteBuffer[] header = 
BufferReaderWriterUtil.allocatedWriteBufferArray();
+
+   /** Buffer for writing region index. */
+   private final ByteBuffer indexBuffer;
+
+   /** Number of channels. When writing a buffer, target subpartition must 
be in this range. */
+   private final int numSubpartitions;
+
+   /** Data file path of the target {@link PartitionedFile}. */
+   private final Path dataFilePath;
+
+   /** Index file path of the target {@link PartitionedFile}. */
+   private final Path indexFilePath;
+
+   /** Number of bytes written for each subpartition in the current 
region. */
+   private final long[] subpartitionBytes;
+
+   /** Number of buffers written for each subpartition in the current 
region. */
+   private final int[] subpartitionBuffers;
+
+   /** Opened data file channel of the target {@link PartitionedFile}. */
+   private FileChannel dataFileChannel;
+
+   /** Opened index file channel of the target {@link PartitionedFile}. */
+   private FileChannel indexFileChannel;
+
+   /** Number of bytes written to the target {@link PartitionedFile}. */
+   private long totalBytesWritten;
+
+   /** Number of regions written to the target {@link PartitionedFile}. */
+   private int numRegions;
+
+   /** Current subpartition to write. Buffer writing must be in 
subpartition order within each region. */
+   private int currentSubpartition;
+
+   /** Whether all index data can be cached in memory or not. */
+   private boolean canCacheAllIndexData = true;
+
+   /** Whether this file writer is finished. */
+   private boolean isFinished;
+
+   public PartitionedFileWriter(String basePath, int numSubpartitions) {
+   checkArgument(basePath != null, "Base path must not be null.");
+   checkArgument(numSubpartitions > 0, "Illegal number of 
subpartitions.");
+
+   this.numSubpartitions = numSubpartitions;
+   this.subpartitionBytes = new long[numSubpartitions];
+   this.subpartitionBuffers = new int[numSubpartitions];
+   this.dataFilePath = new File(basePath + 
PartitionedFile.DATA_FILE_SUFFIX).toPath();
+   this.indexFilePath = new File(basePath + 
PartitionedFile.INDEX_FILE_SUFFIX).toPath();
+
+   this.indexBuffer = ByteBuffer.allocate(100 * 1024 * 
PartitionedFile.INDEX_ENTRY_SIZE);
+   indexBuffer.order(PartitionedFile.DEFAULT_BYTE_ORDER);
+   }
+
+   /**
+* Opens the {@link PartitionedFile} for writing.
+*
+* Note: The caller is responsible for releasing the failed {@link 
PartitionedFile} if any exception
+* occurs.
+*/
+   public void open() throws IOException {
+   

[GitHub] [flink] zhijiangW commented on pull request #13523: [FLINK-15981][network] Implement FileRegion way to shuffle file-based blocking partition in network stack

2020-10-30 Thread GitBox


zhijiangW commented on pull request #13523:
URL: https://github.com/apache/flink/pull/13523#issuecomment-719465267


   @StephanEwen, I have updated the code to address the above comments. 
   
   Regarding the tests, I think the existing `BoundedBlockingSubpartitionTest` 
and `BoundedBlockingSubpartitionWriteReadTest` already cover the local read 
part. 
   I will further consider how to cover the remote network read with existing 
tests once you approve the current implementation approach. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-19897) Improve UI related to FLIP-102

2020-10-30 Thread Matthias (Jira)
Matthias created FLINK-19897:


 Summary: Improve UI related to FLIP-102
 Key: FLINK-19897
 URL: https://issues.apache.org/jira/browse/FLINK-19897
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Web Frontend
Reporter: Matthias
 Fix For: 1.12.0


This ticket collects issues that came up after merging FLIP-102 related changes 
into master. The following issues should be fixed.
* Add a tooltip to the Heap metrics row pointing out that the max metrics might 
differ from the configured maximum value. Here's a proposal for the tooltip 
text: {{The maximum heap displayed might differ from the configured values 
depending on the used GC algorithm for this process.}}
* Rename "Network Memory Segments" to "Netty Shuffle Buffer"
* Rename "Network Garbage Collection" to "Garbage Collection"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #13857: [FLINK-16268][table-planner-blink] Failed to run rank over window wit…

2020-10-30 Thread GitBox


flinkbot commented on pull request #13857:
URL: https://github.com/apache/flink/pull/13857#issuecomment-719465188


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 1020e7c964728b427fabfe685f1df771757092b5 (Fri Oct 30 
10:13:59 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-16268).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] lirui-apache opened a new pull request #13857: [FLINK-16268][table-planner-blink] Failed to run rank over window wit…

2020-10-30 Thread GitBox


lirui-apache opened a new pull request #13857:
URL: https://github.com/apache/flink/pull/13857


   …h Hive built-in functions
   
   
   
   ## What is the purpose of the change
   
   Fix NPEs when using Hive UDFs.
   
   
   ## Brief change log
   
 - Properly handle Hive UDFs in UserDefinedFunctionUtils
 - Add test cases
   
   
   ## Verifying this change
   
   Added test cases
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? NA
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] aljoscha commented on pull request #11895: [FLINK-10911][scala-shell] Enable flink-scala-shell with Scala 2.12

2020-10-30 Thread GitBox


aljoscha commented on pull request #11895:
URL: https://github.com/apache/flink/pull/11895#issuecomment-719463468


   @zjffdu Should we still try and get this in for Flink 1.12? Sorry for the 
very long delay on this one! 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] SteNicholas commented on pull request #13853: [FLINK-19837][DataStream] Don't emit intermediate watermarks from watermark operators in BATCH execution mode

2020-10-30 Thread GitBox


SteNicholas commented on pull request #13853:
URL: https://github.com/apache/flink/pull/13853#issuecomment-719462500


   @aljoscha , sorry for my flawed implementation. I agree with the steps you 
provided: use an `assignTimestampsAndWatermarks()` logical operation instead 
of creating an operator directly in `assignTimestampsAndWatermarks()`, and add 
a `TransformationTranslator`. I will follow your comments on this 
implementation right away. Thanks for the detailed suggestion.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] aljoscha commented on a change in pull request #13844: [FLINK-18363] Add user classloader to context in DeSerializationSchema

2020-10-30 Thread GitBox


aljoscha commented on a change in pull request #13844:
URL: https://github.com/apache/flink/pull/13844#discussion_r514989016



##
File path: 
flink-core/src/main/java/org/apache/flink/api/common/serialization/DeserializationSchema.java
##
@@ -112,5 +114,13 @@ default void deserialize(byte[] message, Collector out) 
throws IOException {
 * @see MetricGroup
 */
MetricGroup getMetricGroup();
+
+   /**
+* Gets the ClassLoader to load classes that are not in 
system's classpath, but are part of
+* the jar file of a user job.
+*
+* @see UserCodeClassLoader

Review comment:
   ```suggestion
 * Gets the {@link ClassLoader} to load classes that are not in 
system's classpath, but are part of
 * the jar file of a user job.
 *
 * @see {@link UserCodeClassLoader}
   ```

##
File path: 
flink-core/src/main/java/org/apache/flink/api/common/serialization/SerializationSchema.java
##
@@ -73,5 +75,13 @@ default void open(InitializationContext context) throws 
Exception {
 * @see MetricGroup
 */
MetricGroup getMetricGroup();
+
+   /**
+* Gets the ClassLoader to load classes that are not in 
system's classpath, but are part of
+* the jar file of a user job.
+*
+* @see UserCodeClassLoader

Review comment:
   ```suggestion
/**
 * Gets the {@link ClassLoader} to load classes that are not in 
system's classpath, but are part of
 * the jar file of a user job.
 *
 * @see {@link UserCodeClassLoader}
   ```

##
File path: 
flink-core/src/main/java/org/apache/flink/api/common/serialization/RuntimeContextInitializationContextAdapters.java
##
@@ -0,0 +1,127 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.common.serialization;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.functions.RuntimeContext;
+import org.apache.flink.metrics.MetricGroup;
+import org.apache.flink.util.UserCodeClassLoader;
+
+import java.util.function.Function;
+
+/**
+ * A utility adapters between {@link RuntimeContext}
+ * and {@link DeserializationSchema.InitializationContext}
+ * or {@link SerializationSchema.InitializationContext}.

Review comment:
   ```suggestion
* A set of adapters between {@link RuntimeContext}
* and {@link DeserializationSchema.InitializationContext}
* or {@link SerializationSchema.InitializationContext}.
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] lirui-apache closed pull request #11209: [FLINK-16268][hive][table-planner-blink] Failed to run rank over wind…

2020-10-30 Thread GitBox


lirui-apache closed pull request #11209:
URL: https://github.com/apache/flink/pull/11209


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhijiangW commented on a change in pull request #13595: [FLINK-19582][network] Introduce sort-merge based blocking shuffle to Flink

2020-10-30 Thread GitBox


zhijiangW commented on a change in pull request #13595:
URL: https://github.com/apache/flink/pull/13595#discussion_r514990038



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/SortMergeResultPartition.java
##
@@ -0,0 +1,360 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.partition;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.event.AbstractEvent;
+import org.apache.flink.runtime.io.disk.FileChannelManager;
+import org.apache.flink.runtime.io.network.api.EndOfPartitionEvent;
+import org.apache.flink.runtime.io.network.api.serialization.EventSerializer;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.BufferCompressor;
+import org.apache.flink.runtime.io.network.buffer.BufferPool;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.util.function.SupplierWithException;
+
+import javax.annotation.Nullable;
+import javax.annotation.concurrent.GuardedBy;
+import javax.annotation.concurrent.NotThreadSafe;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.CompletableFuture;
+
+import static org.apache.flink.runtime.io.network.buffer.Buffer.DataType;
+import static 
org.apache.flink.runtime.io.network.partition.SortBuffer.BufferWithChannel;
+import static org.apache.flink.util.Preconditions.checkElementIndex;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * {@link SortMergeResultPartition} appends records and events to {@link 
SortBuffer} and after the {@link SortBuffer}
+ * is full, all data in the {@link SortBuffer} will be copied and spilled to a 
{@link PartitionedFile} in subpartition
+ * index order sequentially. Large records that can not be appended to an 
empty {@link SortBuffer} will be spilled to
+ * the {@link PartitionedFile} separately.
+ */
+@NotThreadSafe
+public class SortMergeResultPartition extends ResultPartition {
+
+   private final Object lock = new Object();
+
+   /** All active readers which are consuming data from this result 
partition now. */
+   @GuardedBy("lock")
+   private final Set<SortMergeSubpartitionReader> readers = new HashSet<>();
+
+   /** {@link PartitionedFile} produced by this result partition. */
+   @GuardedBy("lock")
+   private PartitionedFile resultFile;
+
+   /** Used to generate random file channel ID. */
+   private final FileChannelManager channelManager;
+
+   /** Number of data buffers (excluding events) written for each 
subpartition. */
+   private final int[] numDataBuffers;
+
+   /** A piece of unmanaged memory for data writing. */
+   private final MemorySegment writeBuffer;
+
+   /** Size of network buffer and write buffer. */
+   private final int networkBufferSize;
+
+   /** Current {@link SortBuffer} to append records to. */
+   private SortBuffer currentSortBuffer;
+
+   /** File writer for this result partition. */
+   private PartitionedFileWriter fileWriter;
+
+   public SortMergeResultPartition(
+   String owningTaskName,
+   int partitionIndex,
+   ResultPartitionID partitionId,
+   ResultPartitionType partitionType,
+   int numSubpartitions,
+   int numTargetKeyGroups,
+   int networkBufferSize,
+   ResultPartitionManager partitionManager,
+   FileChannelManager channelManager,
+   @Nullable BufferCompressor bufferCompressor,
+   SupplierWithException<BufferPool, IOException> bufferPoolFactory) {
+
+   super(
+   owningTaskName,
+   partitionIndex,
+   partitionId,
+   partitionType,
+   numSubpartitions,
+   numTargetKeyGroups,
+   

[jira] [Updated] (FLINK-16268) Failed to run rank over window with Hive built-in functions

2020-10-30 Thread Rui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated FLINK-16268:
---
Fix Version/s: 1.11.3

> Failed to run rank over window with Hive built-in functions
> ---
>
> Key: FLINK-16268
> URL: https://issues.apache.org/jira/browse/FLINK-16268
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.3
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The following test:
> {code}
>   @Test
>   public void test() throws Exception {
>   hiveShell.execute("create table emp (dep string,name 
> string,salary int)");
>   hiveShell.insertInto("default", "emp").addRow("1", "A", 
> 1).addRow("1", "B", 2).addRow("2", "C", 3).commit();
>   TableEnvironment tableEnv = // create table env...
>   tableEnv.unloadModule("core");
>   tableEnv.loadModule("hive", new 
> HiveModule(hiveCatalog.getHiveVersion()));
>   tableEnv.loadModule("core", CoreModule.INSTANCE);
>   List<Row> results = 
> TableUtils.collectToList(tableEnv.sqlQuery("select dep,name,rank() over 
> (partition by dep order by salary) as rnk from emp"));
>   }
> {code}
> fails with:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.flink.table.functions.hive.conversion.HiveInspectors.toInspectors(HiveInspectors.java:126)
>   at 
> org.apache.flink.table.functions.hive.HiveGenericUDF.getHiveResultType(HiveGenericUDF.java:97)
>   at 
> org.apache.flink.table.functions.hive.HiveScalarFunction.getResultType(HiveScalarFunction.java:75)
>   at 
> org.apache.flink.table.planner.functions.utils.UserDefinedFunctionUtils$.getResultTypeOfScalarFunction(UserDefinedFunctionUtils.scala:620)
>   at 
> org.apache.flink.table.planner.expressions.PlannerScalarFunctionCall.resultType(call.scala:165)
>   at 
> org.apache.flink.table.planner.expressions.PlannerTypeInferenceUtilImpl.runTypeInference(PlannerTypeInferenceUtilImpl.java:75)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ResolveCallByArgumentsRule$ResolvingCallVisitor.runLegacyTypeInference(ResolveCallByArgumentsRule.java:213)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ResolveCallByArgumentsRule$ResolvingCallVisitor.lambda$visit$2(ResolveCallByArgumentsRule.java:134)
>   at java.util.Optional.orElseGet(Optional.java:267)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ResolveCallByArgumentsRule$ResolvingCallVisitor.visit(ResolveCallByArgumentsRule.java:134)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ResolveCallByArgumentsRule$ResolvingCallVisitor.visit(ResolveCallByArgumentsRule.java:89)
>   at 
> org.apache.flink.table.expressions.ApiExpressionVisitor.visit(ApiExpressionVisitor.java:39)
>   at 
> org.apache.flink.table.expressions.UnresolvedCallExpression.accept(UnresolvedCallExpression.java:135)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ResolveCallByArgumentsRule.lambda$apply$0(ResolveCallByArgumentsRule.java:83)
>   at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267)
>   at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
>   at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
>   at 
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
>   at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>   at 
> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ResolveCallByArgumentsRule.apply(ResolveCallByArgumentsRule.java:84)
> ..
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW commented on a change in pull request #13595: [FLINK-19582][network] Introduce sort-merge based blocking shuffle to Flink

2020-10-30 Thread GitBox


zhijiangW commented on a change in pull request #13595:
URL: https://github.com/apache/flink/pull/13595#discussion_r514989760



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/SortMergeResultPartition.java
##
@@ -0,0 +1,360 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.partition;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.event.AbstractEvent;
+import org.apache.flink.runtime.io.disk.FileChannelManager;
+import org.apache.flink.runtime.io.network.api.EndOfPartitionEvent;
+import org.apache.flink.runtime.io.network.api.serialization.EventSerializer;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.BufferCompressor;
+import org.apache.flink.runtime.io.network.buffer.BufferPool;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.util.function.SupplierWithException;
+
+import javax.annotation.Nullable;
+import javax.annotation.concurrent.GuardedBy;
+import javax.annotation.concurrent.NotThreadSafe;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.CompletableFuture;
+
+import static org.apache.flink.runtime.io.network.buffer.Buffer.DataType;
+import static 
org.apache.flink.runtime.io.network.partition.SortBuffer.BufferWithChannel;
+import static org.apache.flink.util.Preconditions.checkElementIndex;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * {@link SortMergeResultPartition} appends records and events to {@link 
SortBuffer} and after the {@link SortBuffer}
+ * is full, all data in the {@link SortBuffer} will be copied and spilled to a 
{@link PartitionedFile} in subpartition
+ * index order sequentially. Large records that can not be appended to an 
empty {@link SortBuffer} will be spilled to
+ * the {@link PartitionedFile} separately.
+ */
+@NotThreadSafe
+public class SortMergeResultPartition extends ResultPartition {
+
+   private final Object lock = new Object();
+
+   /** All active readers which are consuming data from this result 
partition now. */
+   @GuardedBy("lock")
+   private final Set<SortMergeSubpartitionReader> readers = new HashSet<>();
+
+   /** {@link PartitionedFile} produced by this result partition. */
+   @GuardedBy("lock")
+   private PartitionedFile resultFile;
+
+   /** Used to generate random file channel ID. */
+   private final FileChannelManager channelManager;
+
+   /** Number of data buffers (excluding events) written for each 
subpartition. */
+   private final int[] numDataBuffers;
+
+   /** A piece of unmanaged memory for data writing. */
+   private final MemorySegment writeBuffer;
+
+   /** Size of network buffer and write buffer. */
+   private final int networkBufferSize;
+
+   /** Current {@link SortBuffer} to append records to. */
+   private SortBuffer currentSortBuffer;
+
+   /** File writer for this result partition. */
+   private PartitionedFileWriter fileWriter;
+
+   public SortMergeResultPartition(
+   String owningTaskName,
+   int partitionIndex,
+   ResultPartitionID partitionId,
+   ResultPartitionType partitionType,
+   int numSubpartitions,
+   int numTargetKeyGroups,
+   int networkBufferSize,
+   ResultPartitionManager partitionManager,
+   FileChannelManager channelManager,
+   @Nullable BufferCompressor bufferCompressor,
+   SupplierWithException<BufferPool, IOException> bufferPoolFactory) {
+
+   super(
+   owningTaskName,
+   partitionIndex,
+   partitionId,
+   partitionType,
+   numSubpartitions,
+   numTargetKeyGroups,
+   

[GitHub] [flink] flinkbot edited a comment on pull request #13855: [FLINK-19894][python] Use iloc for positional slicing instead of direct slicing in from_pandas

2020-10-30 Thread GitBox


flinkbot edited a comment on pull request #13855:
URL: https://github.com/apache/flink/pull/13855#issuecomment-71901


   
   ## CI report:
   
   * c4ef2ac24d63eecab3b6c3d1c91083d7b4c4c75b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8665)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13856: [FLINK-19892][python] Replace __metaclass__ field with metaclass keyword

2020-10-30 Thread GitBox


flinkbot edited a comment on pull request #13856:
URL: https://github.com/apache/flink/pull/13856#issuecomment-719444991


   
   ## CI report:
   
   * adc25b804bb9c79925056c5fd6a312cd4b35dbe0 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8666)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhijiangW commented on a change in pull request #13595: [FLINK-19582][network] Introduce sort-merge based blocking shuffle to Flink

2020-10-30 Thread GitBox


zhijiangW commented on a change in pull request #13595:
URL: https://github.com/apache/flink/pull/13595#discussion_r514988635



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/SortMergeResultPartition.java
##
@@ -0,0 +1,360 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.partition;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.event.AbstractEvent;
+import org.apache.flink.runtime.io.disk.FileChannelManager;
+import org.apache.flink.runtime.io.network.api.EndOfPartitionEvent;
+import org.apache.flink.runtime.io.network.api.serialization.EventSerializer;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.BufferCompressor;
+import org.apache.flink.runtime.io.network.buffer.BufferPool;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.util.function.SupplierWithException;
+
+import javax.annotation.Nullable;
+import javax.annotation.concurrent.GuardedBy;
+import javax.annotation.concurrent.NotThreadSafe;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.CompletableFuture;
+
+import static org.apache.flink.runtime.io.network.buffer.Buffer.DataType;
+import static 
org.apache.flink.runtime.io.network.partition.SortBuffer.BufferWithChannel;
+import static org.apache.flink.util.Preconditions.checkElementIndex;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * {@link SortMergeResultPartition} appends records and events to {@link 
SortBuffer} and after the {@link SortBuffer}
+ * is full, all data in the {@link SortBuffer} will be copied and spilled to a 
{@link PartitionedFile} in subpartition
+ * index order sequentially. Large records that can not be appended to an 
empty {@link SortBuffer} will be spilled to
+ * the {@link PartitionedFile} separately.
+ */
+@NotThreadSafe
+public class SortMergeResultPartition extends ResultPartition {
+
+   private final Object lock = new Object();
+
+   /** All active readers which are consuming data from this result 
partition now. */
+   @GuardedBy("lock")
+   private final Set<SortMergeSubpartitionReader> readers = new HashSet<>();
+
+   /** {@link PartitionedFile} produced by this result partition. */
+   @GuardedBy("lock")
+   private PartitionedFile resultFile;
+
+   /** Used to generate random file channel ID. */
+   private final FileChannelManager channelManager;
+
+   /** Number of data buffers (excluding events) written for each 
subpartition. */
+   private final int[] numDataBuffers;
+
+   /** A piece of unmanaged memory for data writing. */
+   private final MemorySegment writeBuffer;
+
+   /** Size of network buffer and write buffer. */
+   private final int networkBufferSize;
+
+   /** Current {@link SortBuffer} to append records to. */
+   private SortBuffer currentSortBuffer;
+
+   /** File writer for this result partition. */
+   private PartitionedFileWriter fileWriter;
+
+   public SortMergeResultPartition(
+   String owningTaskName,
+   int partitionIndex,
+   ResultPartitionID partitionId,
+   ResultPartitionType partitionType,
+   int numSubpartitions,
+   int numTargetKeyGroups,
+   int networkBufferSize,
+   ResultPartitionManager partitionManager,
+   FileChannelManager channelManager,
+   @Nullable BufferCompressor bufferCompressor,
+   SupplierWithException<BufferPool, IOException> bufferPoolFactory) {
+
+   super(
+   owningTaskName,
+   partitionIndex,
+   partitionId,
+   partitionType,
+   numSubpartitions,
+   numTargetKeyGroups,
+   

[GitHub] [flink] flinkbot edited a comment on pull request #13852: [FLINK-19875][table] Integrate file compaction to filesystem connector

2020-10-30 Thread GitBox


flinkbot edited a comment on pull request #13852:
URL: https://github.com/apache/flink/pull/13852#issuecomment-719283002


   
   ## CI report:
   
   * eaa32dd4c7689a8241956b8441105d760bd1747a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8650)
 
   * cfc8a594e8c116bb711c13c05cf4ef62a5bdc98a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8659)
 
   * e8dc8dcf6c81859a3bab198a560e239de6269fa6 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8664)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



