[jira] [Comment Edited] (FLINK-22968) Improve exception message when using toAppendStream[String]

2021-06-10 Thread DaChun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361421#comment-17361421
 ] 

DaChun edited comment on FLINK-22968 at 6/11/21, 5:55 AM:
--

Thank you very much [~jark] ღ( ´・ ) (finger heart).


was (Author: dachun777):
Thank you very much Jark Wu, love you.

> Improve exception message when using toAppendStream[String]
> ---
>
> Key: FLINK-22968
> URL: https://issues.apache.org/jira/browse/FLINK-22968
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.3, 1.13.1
> Environment: {color:#FF}*Flink-1.13.1 and Flink-1.12.1*{color}
>Reporter: DaChun
>Assignee: Nicholas Jiang
>Priority: Major
> Attachments: test.scala, this_error.txt
>
>
> {code:scala}
> package com.bytedance.one
>
> import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
> import org.apache.flink.table.api.Table
> import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
> import org.apache.flink.api.scala._
>
> object test {
>   def main(args: Array[String]): Unit = {
>     val env: StreamExecutionEnvironment = StreamExecutionEnvironment
>       .createLocalEnvironmentWithWebUI()
>     val stream: DataStream[String] = env.readTextFile("data/wc.txt")
>     val tableEnvironment: StreamTableEnvironment = StreamTableEnvironment.create(env)
>     val table: Table = tableEnvironment.fromDataStream(stream)
>     tableEnvironment.createTemporaryView("wc", table)
>     val res: Table = tableEnvironment.sqlQuery("select * from wc")
>     tableEnvironment.toAppendStream[String](res).print()
>     env.execute("test")
>   }
> }
> {code}
> When I run the program, it fails with the following error (the full stack trace
> is in this_error.txt), and the message itself asks me to file an issue:
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program
> cannot be compiled. This is a bug. Please file an issue.
> The program simply reads a stream and converts it into a table; the generic
> type is String. The code is in test.scala and the full error is in
> this_error.txt.
> But if I create an entity class myself and use it as the generic type, or use
> the Row type, it runs fine. Do you have to create your own entity class
> instead of using the String type?
> h3. Summary
> We should improve the exception message a bit: we can state that the given
> type (String) is not allowed in {{toAppendStream}}.
>  
>  
>  
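
As a rough sketch of the proposed improvement (hypothetical helper and wording, not an actual patch), the check could look like this:

{code:scala}
import org.apache.flink.table.api.TableException

// Hypothetical validation; the real change would live in the Scala bridge's
// conversion logic, where the requested result type is inspected.
def validateAppendStreamType(clazz: Class[_]): Unit = {
  if (clazz == classOf[String]) {
    throw new TableException(
      s"The given type ${clazz.getName} is not allowed in toAppendStream. " +
        "Use Row, a tuple type, or a POJO instead.")
  }
}
{code}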



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22968) Improve exception message when using toAppendStream[String]

2021-06-10 Thread DaChun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361421#comment-17361421
 ] 

DaChun commented on FLINK-22968:


Thank you very much Jark Wu, love you.

> Improve exception message when using toAppendStream[String]
> ---
>
> Key: FLINK-22968
> URL: https://issues.apache.org/jira/browse/FLINK-22968
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.3, 1.13.1
> Environment: {color:#FF}*Flink-1.13.1 and Flink-1.12.1*{color}
>Reporter: DaChun
>Assignee: Nicholas Jiang
>Priority: Major
> Attachments: test.scala, this_error.txt
>
>
> {code:scala}
> package com.bytedance.one
>
> import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
> import org.apache.flink.table.api.Table
> import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
> import org.apache.flink.api.scala._
>
> object test {
>   def main(args: Array[String]): Unit = {
>     val env: StreamExecutionEnvironment = StreamExecutionEnvironment
>       .createLocalEnvironmentWithWebUI()
>     val stream: DataStream[String] = env.readTextFile("data/wc.txt")
>     val tableEnvironment: StreamTableEnvironment = StreamTableEnvironment.create(env)
>     val table: Table = tableEnvironment.fromDataStream(stream)
>     tableEnvironment.createTemporaryView("wc", table)
>     val res: Table = tableEnvironment.sqlQuery("select * from wc")
>     tableEnvironment.toAppendStream[String](res).print()
>     env.execute("test")
>   }
> }
> {code}
> When I run the program, it fails with the following error (the full stack trace
> is in this_error.txt), and the message itself asks me to file an issue:
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program
> cannot be compiled. This is a bug. Please file an issue.
> The program simply reads a stream and converts it into a table; the generic
> type is String. The code is in test.scala and the full error is in
> this_error.txt.
> But if I create an entity class myself and use it as the generic type, or use
> the Row type, it runs fine. Do you have to create your own entity class
> instead of using the String type?
> h3. Summary
> We should improve the exception message a bit: we can state that the given
> type (String) is not allowed in {{toAppendStream}}.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-22952) docs_404_check fails on azure due to ruby version not available

2021-06-10 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-22952.

Resolution: Fixed

> docs_404_check fails on azure due to ruby version not available
> --
>
> Key: FLINK-22952
> URL: https://issues.apache.org/jira/browse/FLINK-22952
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Affects Versions: 1.12.4
>Reporter: Xintong Song
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0, 1.12.5, 1.13.2
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18852=logs=6dc02e5c-5865-5c6a-c6c5-92d598e3fc43=404fcc1b-71ae-54f6-61c8-430a6aeff2b5
> {code}
> Starting: UseRubyVersion
> ==
> Task : Use Ruby version
> Description  : Use the specified version of Ruby from the tool cache, 
> optionally adding it to the PATH
> Version  : 0.186.0
> Author   : Microsoft Corporation
> Help : 
> https://docs.microsoft.com/azure/devops/pipelines/tasks/tool/use-ruby-version
> ==
> ##[error]Version spec = 2.4 for architecture %25s did not match any version 
> in Agent.ToolsDirectory.
> Available versions: /opt/hostedtoolcache
> 2.5.9,2.6.7,2.7.3,3.0.1
> If this is a Microsoft-hosted agent, check that this image supports 
> side-by-side versions of Ruby at https://aka.ms/hosted-agent-software.
> If this is a self-hosted agent, see how to configure side-by-side Ruby 
> versions at https://go.microsoft.com/fwlink/?linkid=2005989.
> Finishing: UseRubyVersion
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-22952) docs_404_check fails on azure due to ruby version not available

2021-06-10 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17360927#comment-17360927
 ] 

Chesnay Schepler edited comment on FLINK-22952 at 6/11/21, 5:47 AM:


master:
084f8460020a6188e2c36048b6de51ff2e4c7538
2976323b08e1430951191214eb7b2eb3b1a474f6

1.13:
f1c85ffc4416b84cfa638b84f2613db047195bfb
dc44739c53cac1d8177e077742a761bbfb39cb59

1.12:
aa3cd4ba3ad92d3a5b086ef7d6830852ff9f6c7b


was (Author: zentol):
master: 084f8460020a6188e2c36048b6de51ff2e4c7538

1.13: f1c85ffc4416b84cfa638b84f2613db047195bfb

1.12: aa3cd4ba3ad92d3a5b086ef7d6830852ff9f6c7b

> docs_404_check fails on azure due to ruby version not available
> --
>
> Key: FLINK-22952
> URL: https://issues.apache.org/jira/browse/FLINK-22952
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Affects Versions: 1.12.4
>Reporter: Xintong Song
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0, 1.12.5, 1.13.2
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18852=logs=6dc02e5c-5865-5c6a-c6c5-92d598e3fc43=404fcc1b-71ae-54f6-61c8-430a6aeff2b5
> {code}
> Starting: UseRubyVersion
> ==
> Task : Use Ruby version
> Description  : Use the specified version of Ruby from the tool cache, 
> optionally adding it to the PATH
> Version  : 0.186.0
> Author   : Microsoft Corporation
> Help : 
> https://docs.microsoft.com/azure/devops/pipelines/tasks/tool/use-ruby-version
> ==
> ##[error]Version spec = 2.4 for architecture %25s did not match any version 
> in Agent.ToolsDirectory.
> Available versions: /opt/hostedtoolcache
> 2.5.9,2.6.7,2.7.3,3.0.1
> If this is a Microsoft-hosted agent, check that this image supports 
> side-by-side versions of Ruby at https://aka.ms/hosted-agent-software.
> If this is a self-hosted agent, see how to configure side-by-side Ruby 
> versions at https://go.microsoft.com/fwlink/?linkid=2005989.
> Finishing: UseRubyVersion
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22970) The documentation for `TO_TIMESTAMP` UDF has an incorrect description

2021-06-10 Thread Wei-Che Wei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Che Wei updated FLINK-22970:

Description: 
According to this ML discussion 
[http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/confused-about-TO-TIMESTAMP-document-description-td44352.html]

The description of the `TO_TIMESTAMP` UDF is incorrect: it uses the UTC+0 
time zone instead of the session time zone. We should fix the documentation.

  was:
According to this ML discussion 
[http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/confused-about-TO-TIMESTAMP-document-description-td44352.html]

The description of the `TO_TIMESTAMP` UDF is incorrect: it uses the UTC+0 
time zone instead of the session time zone.


> The documentation for `TO_TIMESTAMP` UDF has an incorrect description
> -
>
> Key: FLINK-22970
> URL: https://issues.apache.org/jira/browse/FLINK-22970
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Wei-Che Wei
>Priority: Minor
>
> According to this ML discussion 
> [http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/confused-about-TO-TIMESTAMP-document-description-td44352.html]
> The description of the `TO_TIMESTAMP` UDF is incorrect: it uses the UTC+0 
> time zone instead of the session time zone. We should fix the documentation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22970) The documentation for `TO_TIMESTAMP` UDF has an incorrect description

2021-06-10 Thread Wei-Che Wei (Jira)
Wei-Che Wei created FLINK-22970:
---

 Summary: The documentation for `TO_TIMESTAMP` UDF has an incorrect 
description
 Key: FLINK-22970
 URL: https://issues.apache.org/jira/browse/FLINK-22970
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Reporter: Wei-Che Wei


According to this ML discussion 
[http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/confused-about-TO-TIMESTAMP-document-description-td44352.html]

The description of the `TO_TIMESTAMP` UDF is incorrect: it uses the UTC+0 
time zone instead of the session time zone.
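
For clarity, a minimal Scala sketch of the behavior being described (assuming a streaming TableEnvironment; the literal and time zone are arbitrary):

{code:scala}
import java.time.ZoneId
import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment}

object ToTimestampDemo {
  def main(args: Array[String]): Unit = {
    val tEnv = TableEnvironment.create(
      EnvironmentSettings.newInstance().inStreamingMode().build())
    // The session time zone is set to UTC+8 ...
    tEnv.getConfig.setLocalTimeZone(ZoneId.of("Asia/Shanghai"))
    // ... but TO_TIMESTAMP still parses the literal in UTC+0, which is the
    // behavior the documentation should describe.
    tEnv.sqlQuery("SELECT TO_TIMESTAMP('1970-01-01 00:00:00')").execute().print()
  }
}
{code}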



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-22968) Improve exception message when using toAppendStream[String]

2021-06-10 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-22968:
---

Assignee: Nicholas Jiang

> Improve exception message when using toAppendStream[String]
> ---
>
> Key: FLINK-22968
> URL: https://issues.apache.org/jira/browse/FLINK-22968
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.3, 1.13.1
> Environment: {color:#FF}*Flink-1.13.1 and Flink-1.12.1*{color}
>Reporter: DaChun
>Assignee: Nicholas Jiang
>Priority: Major
> Attachments: test.scala, this_error.txt
>
>
> {code:scala}
> package com.bytedance.one
>
> import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
> import org.apache.flink.table.api.Table
> import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
> import org.apache.flink.api.scala._
>
> object test {
>   def main(args: Array[String]): Unit = {
>     val env: StreamExecutionEnvironment = StreamExecutionEnvironment
>       .createLocalEnvironmentWithWebUI()
>     val stream: DataStream[String] = env.readTextFile("data/wc.txt")
>     val tableEnvironment: StreamTableEnvironment = StreamTableEnvironment.create(env)
>     val table: Table = tableEnvironment.fromDataStream(stream)
>     tableEnvironment.createTemporaryView("wc", table)
>     val res: Table = tableEnvironment.sqlQuery("select * from wc")
>     tableEnvironment.toAppendStream[String](res).print()
>     env.execute("test")
>   }
> }
> {code}
> When I run the program, it fails with the following error (the full stack trace
> is in this_error.txt), and the message itself asks me to file an issue:
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program
> cannot be compiled. This is a bug. Please file an issue.
> The program simply reads a stream and converts it into a table; the generic
> type is String. The code is in test.scala and the full error is in
> this_error.txt.
> But if I create an entity class myself and use it as the generic type, or use
> the Row type, it runs fine. Do you have to create your own entity class
> instead of using the String type?
> h3. Summary
> We should improve the exception message a bit: we can state that the given
> type (String) is not allowed in {{toAppendStream}}.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-15526) Add RollingPolicy to PartitionWriters

2021-06-10 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361380#comment-17361380
 ] 

luoyuxia edited comment on FLINK-15526 at 6/11/21, 3:59 AM:


[~lzljs3620320] I see that DefaultRollingPolicy.shouldRollOnEvent can already 
avoid writing a big file by file size. Does this Jira aim to add a policy that 
can avoid writing a big file by record number?


was (Author: luoyuxia):
[~lzljs3620320] I see that DefaultRollingPolicy.shouldRollOnEvent already can 
avoid writing a big file by file size. Does this Jira aim to add a policy that 
can avoid writing a big file by record number?

> Add RollingPolicy to PartitionWriters
> -
>
> Key: FLINK-15526
> URL: https://issues.apache.org/jira/browse/FLINK-15526
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Ecosystem
>Reporter: Jingsong Lee
>Priority: Major
>
> Now our partition writers write a whole file per checkpoint if there is no 
> partition change.
> Sometimes this file is too big; it would be better to add a RollingPolicy to 
> control the file size.
> We can just add a policy controlled by file size and record number.
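
As a rough sketch of such a policy (hypothetical class, assuming the RollingPolicy interface of the StreamingFileSink; the per-file record counter is simplified):

{code:scala}
import org.apache.flink.streaming.api.functions.sink.filesystem.{PartFileInfo, RollingPolicy}

/** Rolls a part file once either its size or the number of records written exceeds a limit. */
class SizeOrCountRollingPolicy[IN, BucketID](maxPartSize: Long, maxRecordCount: Long)
  extends RollingPolicy[IN, BucketID] {

  private var recordCount = 0L

  override def shouldRollOnCheckpoint(partFileState: PartFileInfo[BucketID]): Boolean = true

  override def shouldRollOnEvent(partFileState: PartFileInfo[BucketID], element: IN): Boolean = {
    recordCount += 1
    if (partFileState.getSize >= maxPartSize || recordCount >= maxRecordCount) {
      recordCount = 0 // reset the counter for the next part file
      true
    } else {
      false
    }
  }

  override def shouldRollOnProcessingTime(
      partFileState: PartFileInfo[BucketID],
      currentTime: Long): Boolean = false
}
{code}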



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15526) Add RollingPolicy to PartitionWriters

2021-06-10 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361380#comment-17361380
 ] 

luoyuxia commented on FLINK-15526:
--

[~lzljs3620320] I see that DefaultRollingPolicy.shouldRollOnEvent already can 
avoid writing a big file by file size. Does this Jira aim to add a policy that 
can avoid writing a big file by record number?

> Add RollingPolicy to PartitionWriters
> -
>
> Key: FLINK-15526
> URL: https://issues.apache.org/jira/browse/FLINK-15526
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Ecosystem
>Reporter: Jingsong Lee
>Priority: Major
>
> Now our partition writers write a whole file per checkpoint if there is no 
> partition change.
> Sometimes this file is too big; it would be better to add a RollingPolicy to 
> control the file size.
> We can just add a policy controlled by file size and record number.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22968) Improve exception message when using toAppendStream[String]

2021-06-10 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361378#comment-17361378
 ] 

Nicholas Jiang commented on FLINK-22968:


[~jark], I would like to take this issue. Could you please assign this to me?

> Improve exception message when using toAppendStream[String]
> ---
>
> Key: FLINK-22968
> URL: https://issues.apache.org/jira/browse/FLINK-22968
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.3, 1.13.1
> Environment: {color:#FF}*Flink-1.13.1 and Flink-1.12.1*{color}
>Reporter: DaChun
>Priority: Major
> Attachments: test.scala, this_error.txt
>
>
> {code:scala}
> package com.bytedance.one
>
> import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
> import org.apache.flink.table.api.Table
> import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
> import org.apache.flink.api.scala._
>
> object test {
>   def main(args: Array[String]): Unit = {
>     val env: StreamExecutionEnvironment = StreamExecutionEnvironment
>       .createLocalEnvironmentWithWebUI()
>     val stream: DataStream[String] = env.readTextFile("data/wc.txt")
>     val tableEnvironment: StreamTableEnvironment = StreamTableEnvironment.create(env)
>     val table: Table = tableEnvironment.fromDataStream(stream)
>     tableEnvironment.createTemporaryView("wc", table)
>     val res: Table = tableEnvironment.sqlQuery("select * from wc")
>     tableEnvironment.toAppendStream[String](res).print()
>     env.execute("test")
>   }
> }
> {code}
> When I run the program, it fails with the following error (the full stack trace
> is in this_error.txt), and the message itself asks me to file an issue:
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program
> cannot be compiled. This is a bug. Please file an issue.
> The program simply reads a stream and converts it into a table; the generic
> type is String. The code is in test.scala and the full error is in
> this_error.txt.
> But if I create an entity class myself and use it as the generic type, or use
> the Row type, it runs fine. Do you have to create your own entity class
> instead of using the String type?
> h3. Summary
> We should improve the exception message a bit: we can state that the given
> type (String) is not allowed in {{toAppendStream}}.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-22907) SQL Client queries fail on select statement

2021-06-10 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-22907.
---
Resolution: Not A Problem

> SQL Client queries fail on select statement
> 
>
> Key: FLINK-22907
> URL: https://issues.apache.org/jira/browse/FLINK-22907
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.13.0
> Environment: python 3.7.6
> JupyterLab
> apache-flink==1.13.0
>Reporter: Ryan Darling
>Priority: Major
> Attachments: flink_sql_issue1.JPG
>
>
> I have configured a Jupyter notebook to test Flink jobs with the SQL client. 
> All of my source / sink table creation statements succeed, but we are unable 
> to query the created tables.
> In this scenario we are attempting to pull data from a Kafka topic into a 
> source table and, if successful, insert it into a sink table and on to another 
> Kafka topic. 
> We start sql_client.sh, passing the needed jar file locations 
> (flink-sql-connector-kafka_2.11-1.13.0.jar, 
> flink-table-planner_2.12-1.13.0.jar, flink-table-common-1.13.0.jar, 
> flink-sql-avro-confluent-registry-1.13.0.jar, 
> flink-table-planner-blink_2.12-1.13.0.jar).
> Next we create the source table, pointing to a Kafka topic that we know has 
> Avro data in it and schemas registered in the schema registry. 
> CREATE TABLE avro_sources ( 
>  prop_id INT,
>  check_in_dt STRING,
>  check_out_dt STRING,
>  los INT,
>  guests INT,
>  rate_amt INT
>  ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'avro_rate',
>  'properties.bootstrap.servers' = '',
>  'key.format' = 'avro-confluent',
>  'key.avro-confluent.schema-registry.url' = '',
>  'key.fields' = 'prop_id',
>  'value.format' = 'avro-confluent',
>  'value.avro-confluent.schema-registry.url' = '',
>  'value.fields-include' = 'ALL',
>  'key.avro-confluent.schema-registry.subject' = 'avro_rate',
>  'value.avro-confluent.schema-registry.subject' = 'avro_rate'
>  )
>  
> At this point I want to see the data that has been pulled into the source 
> table, but I get the following error and am struggling to find a solution. I 
> feel this could be a bug:
> Flink SQL> select * from avro_sources;
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassCastException: org.codehaus.janino.CompilerFactory cannot be 
> cast to org.codehaus.commons.compiler.ICompilerFactory
> Any guidance on how I can resolve the bug or the problem would be 
> appreciated. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-22907) SQL Client queries fail on select statement

2021-06-10 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361376#comment-17361376
 ] 

Jark Wu edited comment on FLINK-22907 at 6/11/21, 3:52 AM:
---

[~rdarling], please do not use {{--jar}} to load jars that are already in 
{{lib/}}. 

The {{--jar}} option is only meant for loading user jars that contain UDFs, 
sources, and sinks. 

By default, Flink uses a child-first classloader, so if you use {{--jar}} to 
load {{flink-table-blink_2.11-1.13.0.jar}}, the Flink table classes (e.g. 
ICompilerFactory) may be loaded by different classloaders, which is why the 
above exception is thrown. 
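
In other words: keep the flink-table jars under {{lib/}} (they are then loaded on the system classpath) and pass only user-code jars on the command line, e.g. something like {{./bin/sql-client.sh -j /path/to/my-udfs.jar}} (the path here is hypothetical).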


was (Author: jark):
[~rdarling], please do not use {{--jar}} to load jars that are already in 
{{lib/}}. The {{--jar}} option is only meant for loading user jars that contain 
UDFs, sources, and sinks. By default, Flink uses a child-first classloader, so 
if you use {{--jar}} to load {{flink-table-blink_2.11-1.13.0.jar}}, the Flink 
table classes (e.g. ICompilerFactory) may be loaded by different classloaders, 
which is why the above exception is thrown. 

> SQL Client queries fail on select statement
> 
>
> Key: FLINK-22907
> URL: https://issues.apache.org/jira/browse/FLINK-22907
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.13.0
> Environment: python 3.7.6
> JupyterLab
> apache-flink==1.13.0
>Reporter: Ryan Darling
>Priority: Major
> Attachments: flink_sql_issue1.JPG
>
>
> I have configured a Jupyter notebook to test Flink jobs with the SQL client. 
> All of my source / sink table creation statements succeed, but we are unable 
> to query the created tables.
> In this scenario we are attempting to pull data from a Kafka topic into a 
> source table and, if successful, insert it into a sink table and on to another 
> Kafka topic. 
> We start sql_client.sh, passing the needed jar file locations 
> (flink-sql-connector-kafka_2.11-1.13.0.jar, 
> flink-table-planner_2.12-1.13.0.jar, flink-table-common-1.13.0.jar, 
> flink-sql-avro-confluent-registry-1.13.0.jar, 
> flink-table-planner-blink_2.12-1.13.0.jar).
> Next we create the source table, pointing to a Kafka topic that we know has 
> Avro data in it and schemas registered in the schema registry. 
> CREATE TABLE avro_sources ( 
>  prop_id INT,
>  check_in_dt STRING,
>  check_out_dt STRING,
>  los INT,
>  guests INT,
>  rate_amt INT
>  ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'avro_rate',
>  'properties.bootstrap.servers' = '',
>  'key.format' = 'avro-confluent',
>  'key.avro-confluent.schema-registry.url' = '',
>  'key.fields' = 'prop_id',
>  'value.format' = 'avro-confluent',
>  'value.avro-confluent.schema-registry.url' = '',
>  'value.fields-include' = 'ALL',
>  'key.avro-confluent.schema-registry.subject' = 'avro_rate',
>  'value.avro-confluent.schema-registry.subject' = 'avro_rate'
>  )
>  
> At this point I want to see the data that has been pulled into the source 
> table, but I get the following error and am struggling to find a solution. I 
> feel this could be a bug:
> Flink SQL> select * from avro_sources;
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassCastException: org.codehaus.janino.CompilerFactory cannot be 
> cast to org.codehaus.commons.compiler.ICompilerFactory
> Any guidance on how I can resolve the bug or the problem would be 
> appreciated. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22907) SQL Client queries fail on select statement

2021-06-10 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361376#comment-17361376
 ] 

Jark Wu commented on FLINK-22907:
-

[~rdarling], please do not use {{--jar}} to load jars that are already in 
{{lib/}}. The {{--jar}} option is only meant for loading user jars that contain 
UDFs, sources, and sinks. By default, Flink uses a child-first classloader, so 
if you use {{--jar}} to load {{flink-table-blink_2.11-1.13.0.jar}}, the Flink 
table classes (e.g. ICompilerFactory) may be loaded by different classloaders, 
which is why the above exception is thrown. 

> SQL Client queries fail on select statement
> 
>
> Key: FLINK-22907
> URL: https://issues.apache.org/jira/browse/FLINK-22907
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.13.0
> Environment: python 3.7.6
> JupyterLab
> apache-flink==1.13.0
>Reporter: Ryan Darling
>Priority: Major
> Attachments: flink_sql_issue1.JPG
>
>
> I have configured a Jupyter notebook to test Flink jobs with the SQL client. 
> All of my source / sink table creation statements succeed, but we are unable 
> to query the created tables.
> In this scenario we are attempting to pull data from a Kafka topic into a 
> source table and, if successful, insert it into a sink table and on to another 
> Kafka topic. 
> We start sql_client.sh, passing the needed jar file locations 
> (flink-sql-connector-kafka_2.11-1.13.0.jar, 
> flink-table-planner_2.12-1.13.0.jar, flink-table-common-1.13.0.jar, 
> flink-sql-avro-confluent-registry-1.13.0.jar, 
> flink-table-planner-blink_2.12-1.13.0.jar).
> Next we create the source table, pointing to a Kafka topic that we know has 
> Avro data in it and schemas registered in the schema registry. 
> CREATE TABLE avro_sources ( 
>  prop_id INT,
>  check_in_dt STRING,
>  check_out_dt STRING,
>  los INT,
>  guests INT,
>  rate_amt INT
>  ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'avro_rate',
>  'properties.bootstrap.servers' = '',
>  'key.format' = 'avro-confluent',
>  'key.avro-confluent.schema-registry.url' = '',
>  'key.fields' = 'prop_id',
>  'value.format' = 'avro-confluent',
>  'value.avro-confluent.schema-registry.url' = '',
>  'value.fields-include' = 'ALL',
>  'key.avro-confluent.schema-registry.subject' = 'avro_rate',
>  'value.avro-confluent.schema-registry.subject' = 'avro_rate'
>  )
>  
> At this point I want to see the data that has been pulled into the source 
> table, but I get the following error and am struggling to find a solution. I 
> feel this could be a bug:
> Flink SQL> select * from avro_sources;
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassCastException: org.codehaus.janino.CompilerFactory cannot be 
> cast to org.codehaus.commons.compiler.ICompilerFactory
> Any guidance on how I can resolve the bug or the problem would be 
> appreciated. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22968) String type cannot be used when creating Flink SQL

2021-06-10 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-22968:

Priority: Major  (was: Blocker)

>  String type cannot be used when creating Flink SQL
> ---
>
> Key: FLINK-22968
> URL: https://issues.apache.org/jira/browse/FLINK-22968
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.3, 1.13.1
> Environment: {color:#FF}*Flink-1.13.1 and Flink-1.12.1*{color}
>Reporter: DaChun
>Priority: Major
> Attachments: test.scala, this_error.txt
>
>
> {code:scala}
> package com.bytedance.one
>
> import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
> import org.apache.flink.table.api.Table
> import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
> import org.apache.flink.api.scala._
>
> object test {
>   def main(args: Array[String]): Unit = {
>     val env: StreamExecutionEnvironment = StreamExecutionEnvironment
>       .createLocalEnvironmentWithWebUI()
>     val stream: DataStream[String] = env.readTextFile("data/wc.txt")
>     val tableEnvironment: StreamTableEnvironment = StreamTableEnvironment.create(env)
>     val table: Table = tableEnvironment.fromDataStream(stream)
>     tableEnvironment.createTemporaryView("wc", table)
>     val res: Table = tableEnvironment.sqlQuery("select * from wc")
>     tableEnvironment.toAppendStream[String](res).print()
>     env.execute("test")
>   }
> }
> {code}
> When I run the program, it fails with the following error (the full stack trace
> is in this_error.txt), and the message itself asks me to file an issue:
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program
> cannot be compiled. This is a bug. Please file an issue.
> The program simply reads a stream and converts it into a table; the generic
> type is String. The code is in test.scala and the full error is in
> this_error.txt.
> But if I create an entity class myself and use it as the generic type, or use
> the Row type, it runs fine. Do you have to create your own entity class
> instead of using the String type?
> h3. Summary
> We should improve the exception message a bit: we can state that the given
> type (String) is not allowed in {{toAppendStream}}.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22968) Improve exception message when using toAppendStream[String]

2021-06-10 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-22968:

Summary: Improve exception message when using toAppendStream[String]  (was: 
 String type cannot be used when creating Flink SQL)

> Improve exception message when using toAppendStream[String]
> ---
>
> Key: FLINK-22968
> URL: https://issues.apache.org/jira/browse/FLINK-22968
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.3, 1.13.1
> Environment: {color:#FF}*Flink-1.13.1 and Flink-1.12.1*{color}
>Reporter: DaChun
>Priority: Major
> Attachments: test.scala, this_error.txt
>
>
> {code:scala}
> package com.bytedance.one
>
> import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
> import org.apache.flink.table.api.Table
> import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
> import org.apache.flink.api.scala._
>
> object test {
>   def main(args: Array[String]): Unit = {
>     val env: StreamExecutionEnvironment = StreamExecutionEnvironment
>       .createLocalEnvironmentWithWebUI()
>     val stream: DataStream[String] = env.readTextFile("data/wc.txt")
>     val tableEnvironment: StreamTableEnvironment = StreamTableEnvironment.create(env)
>     val table: Table = tableEnvironment.fromDataStream(stream)
>     tableEnvironment.createTemporaryView("wc", table)
>     val res: Table = tableEnvironment.sqlQuery("select * from wc")
>     tableEnvironment.toAppendStream[String](res).print()
>     env.execute("test")
>   }
> }
> {code}
> When I run the program, it fails with the following error (the full stack trace
> is in this_error.txt), and the message itself asks me to file an issue:
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program
> cannot be compiled. This is a bug. Please file an issue.
> The program simply reads a stream and converts it into a table; the generic
> type is String. The code is in test.scala and the full error is in
> this_error.txt.
> But if I create an entity class myself and use it as the generic type, or use
> the Row type, it runs fine. Do you have to create your own entity class
> instead of using the String type?
> h3. Summary
> We should improve the exception message a bit: we can state that the given
> type (String) is not allowed in {{toAppendStream}}.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22968) String type cannot be used when creating Flink SQL

2021-06-10 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-22968:

Description: 
{code:scala}
package com.bytedance.one

import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
import org.apache.flink.table.api.Table
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
import org.apache.flink.api.scala._


object test {
  def main(args: Array[String]): Unit = {

    val env: StreamExecutionEnvironment = StreamExecutionEnvironment
      .createLocalEnvironmentWithWebUI()

    val stream: DataStream[String] = env.readTextFile("data/wc.txt")

    val tableEnvironment: StreamTableEnvironment = StreamTableEnvironment.create(env)

    val table: Table = tableEnvironment.fromDataStream(stream)

    tableEnvironment.createTemporaryView("wc", table)

    val res: Table = tableEnvironment.sqlQuery("select * from wc")

    tableEnvironment.toAppendStream[String](res).print()

    env.execute("test")
  }
}
{code}

When I run the program, it fails with the following error (the full stack trace 
is in this_error.txt), and the message itself asks me to file an issue:

Caused by: org.apache.flink.api.common.InvalidProgramException: Table program 
cannot be compiled. This is a bug. Please file an issue.

The program simply reads a stream and converts it into a table; the generic 
type is String. The code is in test.scala and the full error is in 
this_error.txt.

But if I create an entity class myself and use it as the generic type, or use 
the Row type, it runs fine. Do you have to create your own entity class instead 
of using the String type?


h3. Summary

We should improve the exception message a bit: we can state that the given type 
(String) is not allowed in {{toAppendStream}}.

 

 

 

  was:
{code:scala}
package com.bytedance.one

import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
import org.apache.flink.table.api.Table
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
import org.apache.flink.api.scala._


object test {
  def main(args: Array[String]): Unit = {

    val env: StreamExecutionEnvironment = StreamExecutionEnvironment
      .createLocalEnvironmentWithWebUI()

    val stream: DataStream[String] = env.readTextFile("data/wc.txt")

    val tableEnvironment: StreamTableEnvironment = StreamTableEnvironment.create(env)

    val table: Table = tableEnvironment.fromDataStream(stream)

    tableEnvironment.createTemporaryView("wc", table)

    val res: Table = tableEnvironment.sqlQuery("select * from wc")

    tableEnvironment.toAppendStream[String](res).print()

    env.execute("test")
  }
}
{code}

When I run the program, it fails with the following error (the full stack trace 
is in this_error.txt), and the message itself asks me to file an issue:

Caused by: org.apache.flink.api.common.InvalidProgramException: Table program 
cannot be compiled. This is a bug. Please file an issue.

The program simply reads a stream and converts it into a table; the generic 
type is String. The code is in test.scala and the full error is in 
this_error.txt.

But if I create an entity class myself and use it as the generic type, or use 
the Row type, it runs fine. Do you have to create your own entity class instead 
of using the String type?


## Summary

We should improve the exception message a bit: we can state that the given type 
(String) is not allowed in {{toAppendStream}}.

 

 

 


>  String type cannot be used when creating Flink SQL
> ---
>
> Key: FLINK-22968
> URL: https://issues.apache.org/jira/browse/FLINK-22968
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.3, 1.13.1
> Environment: {color:#FF}*Flink-1.13.1 and Flink-1.12.1*{color}
>Reporter: DaChun
>Priority: Blocker
> Attachments: test.scala, this_error.txt
>
>
> {code:scala}
> package com.bytedance.one
>
> import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
> import org.apache.flink.table.api.Table
> import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
> import org.apache.flink.api.scala._
>
> object test {
>   def main(args: Array[String]): Unit = {
>     val env: StreamExecutionEnvironment = StreamExecutionEnvironment
>       .createLocalEnvironmentWithWebUI()
>     val stream: DataStream[String] = env.readTextFile("data/wc.txt")
>     val tableEnvironment: StreamTableEnvironment = StreamTableEnvironment.create(env)
>     val table: Table = tableEnvironment.fromDataStream(stream)
>     tableEnvironment.createTemporaryView("wc", table)
>     val res: Table = tableEnvironment.sqlQuery("select * from wc")
> 

[jira] [Updated] (FLINK-22968) String type cannot be used when creating Flink SQL

2021-06-10 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-22968:

Description: 
{code:scala}
package com.bytedance.one

import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
import org.apache.flink.table.api.Table
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
import org.apache.flink.api.scala._


object test {
  def main(args: Array[String]): Unit = {

    val env: StreamExecutionEnvironment = StreamExecutionEnvironment
      .createLocalEnvironmentWithWebUI()

    val stream: DataStream[String] = env.readTextFile("data/wc.txt")

    val tableEnvironment: StreamTableEnvironment = StreamTableEnvironment.create(env)

    val table: Table = tableEnvironment.fromDataStream(stream)

    tableEnvironment.createTemporaryView("wc", table)

    val res: Table = tableEnvironment.sqlQuery("select * from wc")

    tableEnvironment.toAppendStream[String](res).print()

    env.execute("test")
  }
}
{code}

When I run the program, it fails with the following error (the full stack trace 
is in this_error.txt), and the message itself asks me to file an issue:

Caused by: org.apache.flink.api.common.InvalidProgramException: Table program 
cannot be compiled. This is a bug. Please file an issue.

The program simply reads a stream and converts it into a table; the generic 
type is String. The code is in test.scala and the full error is in 
this_error.txt.

But if I create an entity class myself and use it as the generic type, or use 
the Row type, it runs fine. Do you have to create your own entity class instead 
of using the String type?


## Summary

We should improve the exception message a bit: we can state that the given type 
(String) is not allowed in {{toAppendStream}}.

 

 

 

  was:
When I run the program, it fails with the following error (the full stack trace 
is in this_error.txt), and the message itself asks me to file an issue:

Caused by: org.apache.flink.api.common.InvalidProgramException: Table program 
cannot be compiled. This is a bug. Please file an issue.

The program simply reads a stream and converts it into a table; the generic 
type is String. The code is in test.scala and the full error is in 
this_error.txt.

But if I create an entity class myself and use it as the generic type, or use 
the Row type, it runs fine. Do you have to create your own entity class instead 
of using the String type?

 

 

 


>  String type cannot be used when creating Flink SQL
> ---
>
> Key: FLINK-22968
> URL: https://issues.apache.org/jira/browse/FLINK-22968
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.3, 1.13.1
> Environment: {color:#FF}*Flink-1.13.1 and Flink-1.12.1*{color}
>Reporter: DaChun
>Priority: Blocker
> Attachments: test.scala, this_error.txt
>
>
> {code:scala}
> package com.bytedance.one
>
> import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
> import org.apache.flink.table.api.Table
> import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
> import org.apache.flink.api.scala._
>
> object test {
>   def main(args: Array[String]): Unit = {
>     val env: StreamExecutionEnvironment = StreamExecutionEnvironment
>       .createLocalEnvironmentWithWebUI()
>     val stream: DataStream[String] = env.readTextFile("data/wc.txt")
>     val tableEnvironment: StreamTableEnvironment = StreamTableEnvironment.create(env)
>     val table: Table = tableEnvironment.fromDataStream(stream)
>     tableEnvironment.createTemporaryView("wc", table)
>     val res: Table = tableEnvironment.sqlQuery("select * from wc")
>     tableEnvironment.toAppendStream[String](res).print()
>     env.execute("test")
>   }
> }
> {code}
> When I run the program, it fails with the following error (the full stack trace
> is in this_error.txt), and the message itself asks me to file an issue:
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program
> cannot be compiled. This is a bug. Please file an issue.
> The program simply reads a stream and converts it into a table; the generic
> type is String. The code is in test.scala and the full error is in
> this_error.txt.
> But if I create an entity class myself and use it as the generic type, or use
> the Row type, it runs fine. Do you have to create your own entity class
> instead of using the String type?
> ## Summary
> We should improve the exception message a bit: we can state that the given
> type (String) is not allowed in {{toAppendStream}}.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22968) String type cannot be used when creating Flink SQL

2021-06-10 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-22968:

Docs Text:   (was: package com.bytedance.one

import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
import org.apache.flink.table.api.Table
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
import org.apache.flink.api.scala._


object test {
  def main(args: Array[String]): Unit = {

    val env: StreamExecutionEnvironment = StreamExecutionEnvironment
      .createLocalEnvironmentWithWebUI()

    val stream: DataStream[String] = env.readTextFile("data/wc.txt")

    val tableEnvironment: StreamTableEnvironment = StreamTableEnvironment.create(env)

    val table: Table = tableEnvironment.fromDataStream(stream)

    tableEnvironment.createTemporaryView("wc", table)

    val res: Table = tableEnvironment.sqlQuery("select * from wc")

    tableEnvironment.toAppendStream[String](res).print()

    env.execute("test")
  }
}

)

>  String type cannot be used when creating Flink SQL
> ---
>
> Key: FLINK-22968
> URL: https://issues.apache.org/jira/browse/FLINK-22968
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.3, 1.13.1
> Environment: {color:#FF}*Flink-1.13.1 and Flink-1.12.1*{color}
>Reporter: DaChun
>Priority: Blocker
> Attachments: test.scala, this_error.txt
>
>
> When I run the program, it fails with the following error (the full stack trace
> is in this_error.txt), and the message itself asks me to file an issue:
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program
> cannot be compiled. This is a bug. Please file an issue.
> The program simply reads a stream and converts it into a table; the generic
> type is String. The code is in test.scala and the full error is in
> this_error.txt.
> But if I create an entity class myself and use it as the generic type, or use
> the Row type, it runs fine. Do you have to create your own entity class
> instead of using the String type?
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22968) String type cannot be used when creating Flink SQL

2021-06-10 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361373#comment-17361373
 ] 

Jark Wu commented on FLINK-22968:
-

But I have to admit we should improve the exception message a bit. Will change 
the title and description. 

>  String type cannot be used when creating Flink SQL
> ---
>
> Key: FLINK-22968
> URL: https://issues.apache.org/jira/browse/FLINK-22968
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.3, 1.13.1
> Environment: {color:#FF}*Flink-1.13.1 and Flink-1.12.1*{color}
>Reporter: DaChun
>Priority: Blocker
> Attachments: test.scala, this_error.txt
>
>
> When I run the program, it fails with the following error (the full stack trace
> is in this_error.txt), and the message itself asks me to file an issue:
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program
> cannot be compiled. This is a bug. Please file an issue.
> The program simply reads a stream and converts it into a table; the generic
> type is String. The code is in test.scala and the full error is in
> this_error.txt.
> But if I create an entity class myself and use it as the generic type, or use
> the Row type, it runs fine. Do you have to create your own entity class
> instead of using the String type?
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22968) String type cannot be used when creating Flink SQL

2021-06-10 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361370#comment-17361370
 ] 

Jark Wu commented on FLINK-22968:
-

Hi [~DaChun777], thanks for reporting this, but this is not a bug. Flink does 
support consuming a DataStream of an atomic type, e.g. String. However, 
{{toAppendStream()}} only supports {{Row}} or a POJO as the result type. This 
is stated in the Javadoc. 

{code}
  /**
    * Converts the given [[Table]] into an append [[DataStream]] of a specified type.
    *
    * The [[Table]] must only have insert (append) changes. If the [[Table]] is also modified
    * by update or delete changes, the conversion will fail.
    *
    * The fields of the [[Table]] are mapped to [[DataStream]] fields as follows:
    * - [[Row]] and Scala Tuple types: Fields are mapped by position, field types must match.
    * - POJO [[DataStream]] types: Fields are mapped by field name, field types must match.
    *
    * @param table The [[Table]] to convert.
    * @tparam T The type of the resulting [[DataStream]].
    * @return The converted [[DataStream]].
    */
  def toAppendStream[T: TypeInformation](table: Table): DataStream[T]
{code}

You can update your code to use Row as the output type, e.g. 
{{tableEnvironment.toAppendStream[Row](res).print()}}; then the job should be 
able to run. 
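
For reference, a minimal sketch of the corrected program under the same setup as in the report (only the result type changes; the surrounding code is from the attached test.scala):

{code:scala}
package com.bytedance.one

import org.apache.flink.api.scala._
import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
import org.apache.flink.table.api.Table
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
import org.apache.flink.types.Row

object test {
  def main(args: Array[String]): Unit = {
    val env: StreamExecutionEnvironment = StreamExecutionEnvironment
      .createLocalEnvironmentWithWebUI()
    val stream: DataStream[String] = env.readTextFile("data/wc.txt")
    val tableEnvironment: StreamTableEnvironment = StreamTableEnvironment.create(env)
    val table: Table = tableEnvironment.fromDataStream(stream)
    tableEnvironment.createTemporaryView("wc", table)
    val res: Table = tableEnvironment.sqlQuery("select * from wc")
    // Row is a supported result type for toAppendStream, unlike String
    tableEnvironment.toAppendStream[Row](res).print()
    env.execute("test")
  }
}
{code}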



>  String type cannot be used when creating Flink SQL
> ---
>
> Key: FLINK-22968
> URL: https://issues.apache.org/jira/browse/FLINK-22968
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.3, 1.13.1
> Environment: {color:#FF}*Flink-1.13.1 and Flink-1.12.1*{color}
>Reporter: DaChun
>Priority: Blocker
> Attachments: test.scala, this_error.txt
>
>
> When I run the program, it fails with the following error (the full stack trace
> is in this_error.txt), and the message itself asks me to file an issue:
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program
> cannot be compiled. This is a bug. Please file an issue.
> The program simply reads a stream and converts it into a table; the generic
> type is String. The code is in test.scala and the full error is in
> this_error.txt.
> But if I create an entity class myself and use it as the generic type, or use
> the Row type, it runs fine. Do you have to create your own entity class
> instead of using the String type?
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22969) Validate the topic is not null or empty string when creating kafka source/sink function

2021-06-10 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-22969:

Component/s: Table SQL / Ecosystem
 Connectors / Kafka

> Validate the topic is not null or empty string when creating kafka source/sink 
> function 
> --
>
> Key: FLINK-22969
> URL: https://issues.apache.org/jira/browse/FLINK-22969
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.14.0
>Reporter: Shengkai Fang
>Priority: Major
>
> Add test in UpsertKafkaTableITCase
> {code:java}
>  @Test
> public void testSourceSinkWithKeyAndPartialValue() throws Exception {
> // we always use a different topic name for each parameterized topic,
> // in order to make sure the topic can be created.
> final String topic = "key_partial_value_topic_" + format;
> createTestTopic(topic, 1, 1); // use single partition to guarantee 
> orders in tests
> // -- Produce an event time stream into Kafka 
> ---
> String bootstraps = standardProps.getProperty("bootstrap.servers");
> // k_user_id and user_id have different data types to verify the 
> correct mapping,
> // fields are reordered on purpose
> final String createTable =
> String.format(
> "CREATE TABLE upsert_kafka (\n"
> + "  `k_user_id` BIGINT,\n"
> + "  `name` STRING,\n"
> + "  `timestamp` TIMESTAMP(3) METADATA,\n"
> + "  `k_event_id` BIGINT,\n"
> + "  `user_id` INT,\n"
> + "  `payload` STRING,\n"
> + "  PRIMARY KEY (k_event_id, k_user_id) NOT 
> ENFORCED"
> + ") WITH (\n"
> + "  'connector' = 'upsert-kafka',\n"
> + "  'topic' = '%s',\n"
> + "  'properties.bootstrap.servers' = '%s',\n"
> + "  'key.format' = '%s',\n"
> + "  'key.fields-prefix' = 'k_',\n"
> + "  'value.format' = '%s',\n"
> + "  'value.fields-include' = 'EXCEPT_KEY'\n"
> + ")",
> "", bootstraps, format, format);
> tEnv.executeSql(createTable);
> String initialValues =
> "INSERT INTO upsert_kafka\n"
> + "VALUES\n"
> + " (1, 'name 1', TIMESTAMP '2020-03-08 
> 13:12:11.123', 100, 41, 'payload 1'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-09 
> 13:12:11.123', 101, 42, 'payload 2'),\n"
> + " (3, 'name 3', TIMESTAMP '2020-03-10 
> 13:12:11.123', 102, 43, 'payload 3'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-11 
> 13:12:11.123', 101, 42, 'payload')";
> tEnv.executeSql(initialValues).await();
> // -- Consume stream from Kafka ---
> final List result = collectRows(tEnv.sqlQuery("SELECT * FROM 
> upsert_kafka"), 5);
> final List expected =
> Arrays.asList(
> changelogRow(
> "+I",
> 1L,
> "name 1",
> 
> LocalDateTime.parse("2020-03-08T13:12:11.123"),
> 100L,
> 41,
> "payload 1"),
> changelogRow(
> "+I",
> 2L,
> "name 2",
> 
> LocalDateTime.parse("2020-03-09T13:12:11.123"),
> 101L,
> 42,
> "payload 2"),
> changelogRow(
> "+I",
> 3L,
> "name 3",
> 
> LocalDateTime.parse("2020-03-10T13:12:11.123"),
> 102L,
> 43,
> "payload 3"),
> changelogRow(
> "-U",
> 2L,
> "name 2",
> 
> 

[jira] [Comment Edited] (FLINK-22735) HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of timeout

2021-06-10 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361360#comment-17361360
 ] 

frank wang edited comment on FLINK-22735 at 6/11/21, 3:19 AM:
--

[~lirui] I checked the code and found that the change you made in 
[FLINK-22890|https://issues.apache.org/jira/browse/FLINK-22890] has already 
been merged to master, so I tested the code and there is no problem.


was (Author: frank wang):
[~lirui] I checked the code and found 
[FLINK-22890|https://issues.apache.org/jira/browse/FLINK-22890].

> HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of 
> timeout 
> --
>
> Key: FLINK-22735
> URL: https://issues.apache.org/jira/browse/FLINK-22735
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.14.0, 1.13.1
>Reporter: Guowei Ma
>Assignee: frank wang
>Priority: Blocker
>  Labels: stale-blocker, test-stability
> Fix For: 1.14.0
>
> Attachments: wx20210610-222...@2x.png, wx20210610-222...@2x.png
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18205=logs=245e1f2e-ba5b-5570-d689-25ae21e5302f=e7f339b2-a7c3-57d9-00af-3712d4b15354=23726
> {code:java}
> May 20 22:22:26 [ERROR] Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 225.004 s <<< FAILURE! - in 
> org.apache.flink.connectors.hive.HiveTableSourceITCase
> May 20 22:22:26 [ERROR] 
> testStreamPartitionReadByCreateTime(org.apache.flink.connectors.hive.HiveTableSourceITCase)
>   Time elapsed: 120.182 s  <<< ERROR!
> May 20 22:22:26 org.junit.runners.model.TestTimedOutException: test timed out 
> after 120000 milliseconds
> May 20 22:22:26   at java.lang.Thread.sleep(Native Method)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:237)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:113)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> May 20 22:22:26   at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.fetchRows(HiveTableSourceITCase.java:712)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.testStreamPartitionReadByCreateTime(HiveTableSourceITCase.java:652)
> May 20 22:22:26   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> May 20 22:22:26   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 20 22:22:26   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 20 22:22:26   at java.lang.reflect.Method.invoke(Method.java:498)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> May 20 22:22:26   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> May 20 22:22:26   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> May 20 22:22:26   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22735) HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because of timeout

2021-06-10 Thread frank wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361360#comment-17361360
 ] 

frank wang commented on FLINK-22735:


[~lirui] I checked the code and found 
[FLINK-22890|https://issues.apache.org/jira/browse/FLINK-22890].

> HiveTableSourceITCase.testStreamPartitionReadByCreateTime failed because it 
> times out 
> --
>
> Key: FLINK-22735
> URL: https://issues.apache.org/jira/browse/FLINK-22735
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.14.0, 1.13.1
>Reporter: Guowei Ma
>Assignee: frank wang
>Priority: Blocker
>  Labels: stale-blocker, test-stability
> Fix For: 1.14.0
>
> Attachments: wx20210610-222...@2x.png, wx20210610-222...@2x.png
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18205=logs=245e1f2e-ba5b-5570-d689-25ae21e5302f=e7f339b2-a7c3-57d9-00af-3712d4b15354=23726
> {code:java}
> May 20 22:22:26 [ERROR] Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 225.004 s <<< FAILURE! - in 
> org.apache.flink.connectors.hive.HiveTableSourceITCase
> May 20 22:22:26 [ERROR] 
> testStreamPartitionReadByCreateTime(org.apache.flink.connectors.hive.HiveTableSourceITCase)
>   Time elapsed: 120.182 s  <<< ERROR!
> May 20 22:22:26 org.junit.runners.model.TestTimedOutException: test timed out 
> after 120000 milliseconds
> May 20 22:22:26   at java.lang.Thread.sleep(Native Method)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:237)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:113)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> May 20 22:22:26   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> May 20 22:22:26   at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.fetchRows(HiveTableSourceITCase.java:712)
> May 20 22:22:26   at 
> org.apache.flink.connectors.hive.HiveTableSourceITCase.testStreamPartitionReadByCreateTime(HiveTableSourceITCase.java:652)
> May 20 22:22:26   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> May 20 22:22:26   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 20 22:22:26   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 20 22:22:26   at java.lang.reflect.Method.invoke(Method.java:498)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> May 20 22:22:26   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> May 20 22:22:26   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> May 20 22:22:26   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> May 20 22:22:26   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> May 20 22:22:26   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22952) docs_404_check fails on Azure due to Ruby version not available

2021-06-10 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361358#comment-17361358
 ] 

Xintong Song commented on FLINK-22952:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18892=logs=6dc02e5c-5865-5c6a-c6c5-92d598e3fc43

> docs_404_check fails on Azure due to Ruby version not available
> --
>
> Key: FLINK-22952
> URL: https://issues.apache.org/jira/browse/FLINK-22952
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Affects Versions: 1.12.4
>Reporter: Xintong Song
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0, 1.12.5, 1.13.2
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18852=logs=6dc02e5c-5865-5c6a-c6c5-92d598e3fc43=404fcc1b-71ae-54f6-61c8-430a6aeff2b5
> {code}
> Starting: UseRubyVersion
> ==
> Task : Use Ruby version
> Description  : Use the specified version of Ruby from the tool cache, 
> optionally adding it to the PATH
> Version  : 0.186.0
> Author   : Microsoft Corporation
> Help : 
> https://docs.microsoft.com/azure/devops/pipelines/tasks/tool/use-ruby-version
> ==
> ##[error]Version spec = 2.4 for architecture %25s did not match any version 
> in Agent.ToolsDirectory.
> Available versions: /opt/hostedtoolcache
> 2.5.9,2.6.7,2.7.3,3.0.1
> If this is a Microsoft-hosted agent, check that this image supports 
> side-by-side versions of Ruby at https://aka.ms/hosted-agent-software.
> If this is a self-hosted agent, see how to configure side-by-side Ruby 
> versions at https://go.microsoft.com/fwlink/?linkid=2005989.
> Finishing: UseRubyVersion
> {code}
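
For illustration, a sketch of the kind of pipeline change that would address 
this (the actual file touched by the fix and the exact version pinned are 
assumptions):

{code:yaml}
# Azure Pipelines: move the UseRubyVersion spec onto a version that is still
# in the hosted tool cache (2.5.9 / 2.6.7 / 2.7.3 / 3.0.1 at the time).
- task: UseRubyVersion@0
  inputs:
    versionSpec: '>= 2.5'
{code}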



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22457) KafkaSourceLegacyITCase.testMultipleSourcesOnePartition fails because of timeout

2021-06-10 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361357#comment-17361357
 ] 

Xintong Song commented on FLINK-22457:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18892=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=6935

> KafkaSourceLegacyITCase.testMultipleSourcesOnePartition fails because of 
> timeout
> 
>
> Key: FLINK-22457
> URL: https://issues.apache.org/jira/browse/FLINK-22457
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17140=logs=1fc6e7bf-633c-5081-c32a-9dea24b05730=80a658d1-f7f6-5d93-2758-53ac19fd5b19=7045
> {code:java}
> Apr 24 23:47:33 [ERROR] Tests run: 21, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 174.335 s <<< FAILURE! - in 
> org.apache.flink.connector.kafka.source.KafkaSourceLegacyITCase
> Apr 24 23:47:33 [ERROR] 
> testMultipleSourcesOnePartition(org.apache.flink.connector.kafka.source.KafkaSourceLegacyITCase)
>   Time elapsed: 60.019 s  <<< ERROR!
> Apr 24 23:47:33 org.junit.runners.model.TestTimedOutException: test timed out 
> after 60000 milliseconds
> Apr 24 23:47:33   at sun.misc.Unsafe.park(Native Method)
> Apr 24 23:47:33   at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> Apr 24 23:47:33   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
> Apr 24 23:47:33   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> Apr 24 23:47:33   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
> Apr 24 23:47:33   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> Apr 24 23:47:33   at 
> org.apache.flink.test.util.TestUtils.tryExecute(TestUtils.java:49)
> Apr 24 23:47:33   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runMultipleSourcesOnePartitionExactlyOnceTest(KafkaConsumerTestBase.java:1112)
> Apr 24 23:47:33   at 
> org.apache.flink.connector.kafka.source.KafkaSourceLegacyITCase.testMultipleSourcesOnePartition(KafkaSourceLegacyITCase.java:87)
> Apr 24 23:47:33   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Apr 24 23:47:33   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Apr 24 23:47:33   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Apr 24 23:47:33   at java.lang.reflect.Method.invoke(Method.java:498)
> Apr 24 23:47:33   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Apr 24 23:47:33   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Apr 24 23:47:33   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Apr 24 23:47:33   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Apr 24 23:47:33   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> Apr 24 23:47:33   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> Apr 24 23:47:33   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Apr 24 23:47:33   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22387) UpsertKafkaTableITCase hangs when setting up kafka

2021-06-10 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361355#comment-17361355
 ] 

Xintong Song commented on FLINK-22387:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18892=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=6974

> UpsertKafkaTableITCase hangs when setting up kafka
> --
>
> Key: FLINK-22387
> URL: https://issues.apache.org/jira/browse/FLINK-22387
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.13.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16901=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=6932
> {code}
> 2021-04-20T20:01:32.2276988Z Apr 20 20:01:32 "main" #1 prio=5 os_prio=0 
> tid=0x7fe87400b000 nid=0x4028 runnable [0x7fe87df22000]
> 2021-04-20T20:01:32.2277666Z Apr 20 20:01:32java.lang.Thread.State: 
> RUNNABLE
> 2021-04-20T20:01:32.2278338Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okio.Buffer.getByte(Buffer.java:312)
> 2021-04-20T20:01:32.2279325Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okio.RealBufferedSource.readHexadecimalUnsignedLong(RealBufferedSource.java:310)
> 2021-04-20T20:01:32.2280656Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.readChunkSize(Http1ExchangeCodec.java:492)
> 2021-04-20T20:01:32.2281603Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.java:471)
> 2021-04-20T20:01:32.2282163Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okhttp3.internal.Util.skipAll(Util.java:204)
> 2021-04-20T20:01:32.2282870Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okhttp3.internal.Util.discard(Util.java:186)
> 2021-04-20T20:01:32.2283494Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.close(Http1ExchangeCodec.java:511)
> 2021-04-20T20:01:32.2284460Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okio.ForwardingSource.close(ForwardingSource.java:43)
> 2021-04-20T20:01:32.2285183Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okhttp3.internal.connection.Exchange$ResponseBodySource.close(Exchange.java:313)
> 2021-04-20T20:01:32.2285756Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okio.RealBufferedSource.close(RealBufferedSource.java:476)
> 2021-04-20T20:01:32.2286287Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okhttp3.internal.Util.closeQuietly(Util.java:139)
> 2021-04-20T20:01:32.2286795Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okhttp3.ResponseBody.close(ResponseBody.java:192)
> 2021-04-20T20:01:32.2287270Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.okhttp3.Response.close(Response.java:290)
> 2021-04-20T20:01:32.2287913Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.com.github.dockerjava.okhttp.OkDockerHttpClient$OkResponse.close(OkDockerHttpClient.java:285)
> 2021-04-20T20:01:32.2288606Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.lambda$null$0(DefaultInvocationBuilder.java:272)
> 2021-04-20T20:01:32.2289295Z Apr 20 20:01:32  at 
> org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder$$Lambda$340/2058508175.close(Unknown
>  Source)
> 2021-04-20T20:01:32.2289886Z Apr 20 20:01:32  at 
> com.github.dockerjava.api.async.ResultCallbackTemplate.close(ResultCallbackTemplate.java:77)
> 2021-04-20T20:01:32.2290567Z Apr 20 20:01:32  at 
> org.testcontainers.utility.ResourceReaper.start(ResourceReaper.java:202)
> 2021-04-20T20:01:32.2291051Z Apr 20 20:01:32  at 
> org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:205)
> 2021-04-20T20:01:32.2291879Z Apr 20 20:01:32  - locked <0xe9cd50f8> 
> (a [Ljava.lang.Object;)
> 2021-04-20T20:01:32.2292313Z Apr 20 20:01:32  at 
> org.testcontainers.LazyDockerClient.getDockerClient(LazyDockerClient.java:14)
> 2021-04-20T20:01:32.2292870Z Apr 20 20:01:32  at 
> org.testcontainers.LazyDockerClient.authConfig(LazyDockerClient.java:12)
> 2021-04-20T20:01:32.2293383Z Apr 20 20:01:32  at 
> org.testcontainers.containers.GenericContainer.start(GenericContainer.java:310)
> 2021-04-20T20:01:32.2293890Z Apr 20 20:01:32  at 
> org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1029)
> 2021-04-20T20:01:32.2294578Z Apr 20 20:01:32  at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:29)
> 2021-04-20T20:01:32.2295157Z Apr 20 20:01:32  at 
> 

[jira] [Commented] (FLINK-22963) The description of taskmanager.memory.task.heap.size in the official document is incorrect

2021-06-10 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361354#comment-17361354
 ] 

Xintong Song commented on FLINK-22963:
--

[~jasonlee1017], thanks for reporting this issue and volunteering to fix it. 
I've assigned you to the ticket.

According to the [release update 
policy|https://flink.apache.org/downloads.html#update-policy-for-old-releases], 
the community provides bugfixes only for the latest release and the one before 
it. That means this should be fixed for 1.12, 1.13 and of course the upcoming 
1.14 releases.

Usually, contributors only need to open PRs against the master branch, and the 
committers will port the changes to the old release branches if needed. For 
this ticket, we will probably need an additional PR against 1.12, because we 
have switched our documentation framework from Ruby to Hugo since 1.13.

> The description of taskmanager.memory.task.heap.size in the official document 
> is incorrect
> --
>
> Key: FLINK-22963
> URL: https://issues.apache.org/jira/browse/FLINK-22963
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: JasonLee
>Assignee: JasonLee
>Priority: Major
>  Labels: documentation, starter
>
> While studying the TaskManager memory model, I found a problem in the 
> official documentation: the description of taskmanager.memory.task.heap.size 
> is incorrect.
> According to the official memory model, the correct description should be: 
> Task Heap Memory size for TaskExecutors. This is the size of JVM heap memory 
> reserved for tasks. If not specified, it will be derived as Total Flink 
> Memory minus Framework Heap Memory, Framework Off-Heap Memory, Task Off-Heap 
> Memory, Managed Memory and Network Memory.
> However, the official document currently fails to subtract the Framework 
> Off-Heap Memory.
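
For reference, a minimal sketch of the intended derivation (plain Java; all 
sizes and names are illustrative example values, not Flink defaults or actual 
configuration keys):

{code:java}
// Illustrative only: how the task heap falls out of the other components.
public class TaskHeapDerivation {
    public static void main(String[] args) {
        long mib = 1L << 20;
        long totalFlinkMemory = 1600 * mib;
        long frameworkHeap    = 128 * mib;
        long frameworkOffHeap = 128 * mib; // the term the official document omits
        long taskOffHeap      = 0;
        long managedMemory    = 640 * mib;
        long networkMemory    = 160 * mib;

        long taskHeap = totalFlinkMemory - frameworkHeap - frameworkOffHeap
                - taskOffHeap - managedMemory - networkMemory;

        System.out.println(taskHeap / mib + " MiB"); // prints "544 MiB"
    }
}
{code}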



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-22963) The description of taskmanager.memory.task.heap.size in the official document is incorrect

2021-06-10 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song reassigned FLINK-22963:


Assignee: JasonLee

> The description of taskmanager.memory.task.heap.size in the official document 
> is incorrect
> --
>
> Key: FLINK-22963
> URL: https://issues.apache.org/jira/browse/FLINK-22963
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: JasonLee
>Assignee: JasonLee
>Priority: Major
>  Labels: documentation, starter
>
> While studying the TaskManager memory model, I found a problem in the 
> official documentation: the description of taskmanager.memory.task.heap.size 
> is incorrect.
> According to the official memory model, the correct description should be: 
> Task Heap Memory size for TaskExecutors. This is the size of JVM heap memory 
> reserved for tasks. If not specified, it will be derived as Total Flink 
> Memory minus Framework Heap Memory, Framework Off-Heap Memory, Task Off-Heap 
> Memory, Managed Memory and Network Memory.
> However, the official document currently fails to subtract the Framework 
> Off-Heap Memory.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22969) Validate that the topic is not null or an empty string when creating Kafka source/sink functions

2021-06-10 Thread Shengkai Fang (Jira)
Shengkai Fang created FLINK-22969:
-

 Summary: Validate that the topic is not null or an empty string when 
creating Kafka source/sink functions 
 Key: FLINK-22969
 URL: https://issues.apache.org/jira/browse/FLINK-22969
 Project: Flink
  Issue Type: Bug
Reporter: Shengkai Fang


Add a test in UpsertKafkaTableITCase
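
For context, a minimal sketch of the kind of check this ticket asks for (the 
class, helper name, and message are illustrative, not actual Flink code); the 
reproducing test follows below:

{code:java}
import org.apache.flink.table.api.ValidationException;

// Hypothetical helper; Flink's actual option validation hooks live elsewhere.
final class KafkaTopicValidation {

    // Reject a null or empty 'topic' option up front, instead of failing
    // later inside the Kafka client with an obscure error.
    static void validateTopic(String topic) {
        if (topic == null || topic.trim().isEmpty()) {
            throw new ValidationException(
                    "Option 'topic' must be a non-empty string for the Kafka source/sink.");
        }
    }
}
{code}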


{code:java}
 @Test
public void testSourceSinkWithKeyAndPartialValue() throws Exception {
// we always use a different topic name for each parameterized topic,
// in order to make sure the topic can be created.
final String topic = "key_partial_value_topic_" + format;
createTestTopic(topic, 1, 1); // use single partition to guarantee 
orders in tests

// -- Produce an event time stream into Kafka 
---
String bootstraps = standardProps.getProperty("bootstrap.servers");

// k_user_id and user_id have different data types to verify the 
correct mapping,
// fields are reordered on purpose
final String createTable =
String.format(
"CREATE TABLE upsert_kafka (\n"
+ "  `k_user_id` BIGINT,\n"
+ "  `name` STRING,\n"
+ "  `timestamp` TIMESTAMP(3) METADATA,\n"
+ "  `k_event_id` BIGINT,\n"
+ "  `user_id` INT,\n"
+ "  `payload` STRING,\n"
+ "  PRIMARY KEY (k_event_id, k_user_id) NOT 
ENFORCED"
+ ") WITH (\n"
+ "  'connector' = 'upsert-kafka',\n"
+ "  'topic' = '%s',\n"
+ "  'properties.bootstrap.servers' = '%s',\n"
+ "  'key.format' = '%s',\n"
+ "  'key.fields-prefix' = 'k_',\n"
+ "  'value.format' = '%s',\n"
+ "  'value.fields-include' = 'EXCEPT_KEY'\n"
+ ")",
"", bootstraps, format, format);

tEnv.executeSql(createTable);

String initialValues =
"INSERT INTO upsert_kafka\n"
+ "VALUES\n"
+ " (1, 'name 1', TIMESTAMP '2020-03-08 13:12:11.123', 
100, 41, 'payload 1'),\n"
+ " (2, 'name 2', TIMESTAMP '2020-03-09 13:12:11.123', 
101, 42, 'payload 2'),\n"
+ " (3, 'name 3', TIMESTAMP '2020-03-10 13:12:11.123', 
102, 43, 'payload 3'),\n"
+ " (2, 'name 2', TIMESTAMP '2020-03-11 13:12:11.123', 
101, 42, 'payload')";
tEnv.executeSql(initialValues).await();

// -- Consume stream from Kafka ---

final List<Row> result = collectRows(tEnv.sqlQuery("SELECT * FROM 
upsert_kafka"), 5);

final List<Row> expected =
Arrays.asList(
changelogRow(
"+I",
1L,
"name 1",
LocalDateTime.parse("2020-03-08T13:12:11.123"),
100L,
41,
"payload 1"),
changelogRow(
"+I",
2L,
"name 2",
LocalDateTime.parse("2020-03-09T13:12:11.123"),
101L,
42,
"payload 2"),
changelogRow(
"+I",
3L,
"name 3",
LocalDateTime.parse("2020-03-10T13:12:11.123"),
102L,
43,
"payload 3"),
changelogRow(
"-U",
2L,
"name 2",
LocalDateTime.parse("2020-03-09T13:12:11.123"),
101L,
42,
"payload 2"),
changelogRow(
"+U",
2L,
"name 2",
LocalDateTime.parse("2020-03-11T13:12:11.123"),
101L,
42,
"payload"));

assertThat(result, deepEqualTo(expected, true));

// - cleanup 

[jira] [Updated] (FLINK-22969) Validate that the topic is not null or an empty string when creating Kafka source/sink functions

2021-06-10 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang updated FLINK-22969:
--
Issue Type: Improvement  (was: Bug)

> Validate that the topic is not null or an empty string when creating Kafka 
> source/sink functions 
> --
>
> Key: FLINK-22969
> URL: https://issues.apache.org/jira/browse/FLINK-22969
> Project: Flink
>  Issue Type: Improvement
>Reporter: Shengkai Fang
>Priority: Major
>
> Add a test in UpsertKafkaTableITCase
> {code:java}
>  @Test
> public void testSourceSinkWithKeyAndPartialValue() throws Exception {
> // we always use a different topic name for each parameterized topic,
> // in order to make sure the topic can be created.
> final String topic = "key_partial_value_topic_" + format;
> createTestTopic(topic, 1, 1); // use single partition to guarantee 
> orders in tests
> // -- Produce an event time stream into Kafka 
> ---
> String bootstraps = standardProps.getProperty("bootstrap.servers");
> // k_user_id and user_id have different data types to verify the 
> correct mapping,
> // fields are reordered on purpose
> final String createTable =
> String.format(
> "CREATE TABLE upsert_kafka (\n"
> + "  `k_user_id` BIGINT,\n"
> + "  `name` STRING,\n"
> + "  `timestamp` TIMESTAMP(3) METADATA,\n"
> + "  `k_event_id` BIGINT,\n"
> + "  `user_id` INT,\n"
> + "  `payload` STRING,\n"
> + "  PRIMARY KEY (k_event_id, k_user_id) NOT 
> ENFORCED"
> + ") WITH (\n"
> + "  'connector' = 'upsert-kafka',\n"
> + "  'topic' = '%s',\n"
> + "  'properties.bootstrap.servers' = '%s',\n"
> + "  'key.format' = '%s',\n"
> + "  'key.fields-prefix' = 'k_',\n"
> + "  'value.format' = '%s',\n"
> + "  'value.fields-include' = 'EXCEPT_KEY'\n"
> + ")",
> "", bootstraps, format, format);
> tEnv.executeSql(createTable);
> String initialValues =
> "INSERT INTO upsert_kafka\n"
> + "VALUES\n"
> + " (1, 'name 1', TIMESTAMP '2020-03-08 
> 13:12:11.123', 100, 41, 'payload 1'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-09 
> 13:12:11.123', 101, 42, 'payload 2'),\n"
> + " (3, 'name 3', TIMESTAMP '2020-03-10 
> 13:12:11.123', 102, 43, 'payload 3'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-11 
> 13:12:11.123', 101, 42, 'payload')";
> tEnv.executeSql(initialValues).await();
> // -- Consume stream from Kafka ---
> final List<Row> result = collectRows(tEnv.sqlQuery("SELECT * FROM 
> upsert_kafka"), 5);
> final List<Row> expected =
> Arrays.asList(
> changelogRow(
> "+I",
> 1L,
> "name 1",
> 
> LocalDateTime.parse("2020-03-08T13:12:11.123"),
> 100L,
> 41,
> "payload 1"),
> changelogRow(
> "+I",
> 2L,
> "name 2",
> 
> LocalDateTime.parse("2020-03-09T13:12:11.123"),
> 101L,
> 42,
> "payload 2"),
> changelogRow(
> "+I",
> 3L,
> "name 3",
> 
> LocalDateTime.parse("2020-03-10T13:12:11.123"),
> 102L,
> 43,
> "payload 3"),
> changelogRow(
> "-U",
> 2L,
> "name 2",
> 
> LocalDateTime.parse("2020-03-09T13:12:11.123"),
> 101L,
> 42,
> 

[jira] [Updated] (FLINK-22969) Validate that the topic is not null or an empty string when creating Kafka source/sink functions

2021-06-10 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang updated FLINK-22969:
--
Affects Version/s: 1.14.0

> Validate that the topic is not null or an empty string when creating Kafka 
> source/sink functions 
> --
>
> Key: FLINK-22969
> URL: https://issues.apache.org/jira/browse/FLINK-22969
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.14.0
>Reporter: Shengkai Fang
>Priority: Major
>
> Add a test in UpsertKafkaTableITCase
> {code:java}
>  @Test
> public void testSourceSinkWithKeyAndPartialValue() throws Exception {
> // we always use a different topic name for each parameterized topic,
> // in order to make sure the topic can be created.
> final String topic = "key_partial_value_topic_" + format;
> createTestTopic(topic, 1, 1); // use single partition to guarantee 
> orders in tests
> // -- Produce an event time stream into Kafka 
> ---
> String bootstraps = standardProps.getProperty("bootstrap.servers");
> // k_user_id and user_id have different data types to verify the 
> correct mapping,
> // fields are reordered on purpose
> final String createTable =
> String.format(
> "CREATE TABLE upsert_kafka (\n"
> + "  `k_user_id` BIGINT,\n"
> + "  `name` STRING,\n"
> + "  `timestamp` TIMESTAMP(3) METADATA,\n"
> + "  `k_event_id` BIGINT,\n"
> + "  `user_id` INT,\n"
> + "  `payload` STRING,\n"
> + "  PRIMARY KEY (k_event_id, k_user_id) NOT 
> ENFORCED"
> + ") WITH (\n"
> + "  'connector' = 'upsert-kafka',\n"
> + "  'topic' = '%s',\n"
> + "  'properties.bootstrap.servers' = '%s',\n"
> + "  'key.format' = '%s',\n"
> + "  'key.fields-prefix' = 'k_',\n"
> + "  'value.format' = '%s',\n"
> + "  'value.fields-include' = 'EXCEPT_KEY'\n"
> + ")",
> "", bootstraps, format, format);
> tEnv.executeSql(createTable);
> String initialValues =
> "INSERT INTO upsert_kafka\n"
> + "VALUES\n"
> + " (1, 'name 1', TIMESTAMP '2020-03-08 
> 13:12:11.123', 100, 41, 'payload 1'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-09 
> 13:12:11.123', 101, 42, 'payload 2'),\n"
> + " (3, 'name 3', TIMESTAMP '2020-03-10 
> 13:12:11.123', 102, 43, 'payload 3'),\n"
> + " (2, 'name 2', TIMESTAMP '2020-03-11 
> 13:12:11.123', 101, 42, 'payload')";
> tEnv.executeSql(initialValues).await();
> // -- Consume stream from Kafka ---
> final List<Row> result = collectRows(tEnv.sqlQuery("SELECT * FROM 
> upsert_kafka"), 5);
> final List<Row> expected =
> Arrays.asList(
> changelogRow(
> "+I",
> 1L,
> "name 1",
> 
> LocalDateTime.parse("2020-03-08T13:12:11.123"),
> 100L,
> 41,
> "payload 1"),
> changelogRow(
> "+I",
> 2L,
> "name 2",
> 
> LocalDateTime.parse("2020-03-09T13:12:11.123"),
> 101L,
> 42,
> "payload 2"),
> changelogRow(
> "+I",
> 3L,
> "name 3",
> 
> LocalDateTime.parse("2020-03-10T13:12:11.123"),
> 102L,
> 43,
> "payload 3"),
> changelogRow(
> "-U",
> 2L,
> "name 2",
> 
> LocalDateTime.parse("2020-03-09T13:12:11.123"),
> 101L,
> 42,
>

[jira] [Commented] (FLINK-20329) Elasticsearch7DynamicSinkITCase hangs

2021-06-10 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361350#comment-17361350
 ] 

Xintong Song commented on FLINK-20329:
--

Thanks, [~karmagyz]. I cannot come up with any better idea. I've assigned you 
to the ticket. Please go ahead.

> Elasticsearch7DynamicSinkITCase hangs
> -
>
> Key: FLINK-20329
> URL: https://issues.apache.org/jira/browse/FLINK-20329
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Dian Fu
>Assignee: Yangze Guo
>Priority: Major
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10052=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20
> {code}
> 2020-11-24T16:04:05.9260517Z [INFO] Running 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7DynamicSinkITCase
> 2020-11-24T16:19:25.5481231Z 
> ==
> 2020-11-24T16:19:25.5483549Z Process produced no output for 900 seconds.
> 2020-11-24T16:19:25.5484064Z 
> ==
> 2020-11-24T16:19:25.5484498Z 
> ==
> 2020-11-24T16:19:25.5484882Z The following Java processes are running (JPS)
> 2020-11-24T16:19:25.5485475Z 
> ==
> 2020-11-24T16:19:25.5694497Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:25.7263048Z 16192 surefirebooter5057948964630155904.jar
> 2020-11-24T16:19:25.7263515Z 18566 Jps
> 2020-11-24T16:19:25.7263709Z 959 Launcher
> 2020-11-24T16:19:25.7411148Z 
> ==
> 2020-11-24T16:19:25.7427013Z Printing stack trace of Java process 16192
> 2020-11-24T16:19:25.7427369Z 
> ==
> 2020-11-24T16:19:25.7484365Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:26.0848776Z 2020-11-24 16:19:26
> 2020-11-24T16:19:26.0849578Z Full thread dump OpenJDK 64-Bit Server VM 
> (25.275-b01 mixed mode):
> 2020-11-24T16:19:26.0849831Z 
> 2020-11-24T16:19:26.0850185Z "Attach Listener" #32 daemon prio=9 os_prio=0 
> tid=0x7fc148001000 nid=0x48e7 waiting on condition [0x]
> 2020-11-24T16:19:26.0850595Zjava.lang.Thread.State: RUNNABLE
> 2020-11-24T16:19:26.0850814Z 
> 2020-11-24T16:19:26.0851375Z "testcontainers-ryuk" #31 daemon prio=5 
> os_prio=0 tid=0x7fc251232000 nid=0x3fb0 in Object.wait() 
> [0x7fc1012c4000]
> 2020-11-24T16:19:26.0854688Zjava.lang.Thread.State: TIMED_WAITING (on 
> object monitor)
> 2020-11-24T16:19:26.0855379Z  at java.lang.Object.wait(Native Method)
> 2020-11-24T16:19:26.0855844Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$null$1(ResourceReaper.java:142)
> 2020-11-24T16:19:26.0857272Z  - locked <0x8e2bd2d0> (a 
> java.util.ArrayList)
> 2020-11-24T16:19:26.0857977Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$93/1981729428.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0858471Z  at 
> org.rnorth.ducttape.ratelimits.RateLimiter.doWhenReady(RateLimiter.java:27)
> 2020-11-24T16:19:26.0858961Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$start$2(ResourceReaper.java:133)
> 2020-11-24T16:19:26.0859422Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$92/40191541.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0859788Z  at java.lang.Thread.run(Thread.java:748)
> 2020-11-24T16:19:26.0860030Z 
> 2020-11-24T16:19:26.0860371Z "process reaper" #24 daemon prio=10 os_prio=0 
> tid=0x7fc0f803b800 nid=0x3f92 waiting on condition [0x7fc10296e000]
> 2020-11-24T16:19:26.0860913Zjava.lang.Thread.State: TIMED_WAITING 
> (parking)
> 2020-11-24T16:19:26.0861387Z  at sun.misc.Unsafe.park(Native Method)
> 2020-11-24T16:19:26.0862495Z  - parking to wait for  <0x8814bf30> (a 
> java.util.concurrent.SynchronousQueue$TransferStack)
> 2020-11-24T16:19:26.0863253Z  at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> 2020-11-24T16:19:26.0863760Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
> 2020-11-24T16:19:26.0864274Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
> 2020-11-24T16:19:26.0864762Z  at 
> java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
> 2020-11-24T16:19:26.0865299Z  at 
> 

[jira] [Assigned] (FLINK-20329) Elasticsearch7DynamicSinkITCase hangs

2021-06-10 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song reassigned FLINK-20329:


Assignee: Yangze Guo

> Elasticsearch7DynamicSinkITCase hangs
> -
>
> Key: FLINK-20329
> URL: https://issues.apache.org/jira/browse/FLINK-20329
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Dian Fu
>Assignee: Yangze Guo
>Priority: Major
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10052=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20
> {code}
> 2020-11-24T16:04:05.9260517Z [INFO] Running 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7DynamicSinkITCase
> 2020-11-24T16:19:25.5481231Z 
> ==
> 2020-11-24T16:19:25.5483549Z Process produced no output for 900 seconds.
> 2020-11-24T16:19:25.5484064Z 
> ==
> 2020-11-24T16:19:25.5484498Z 
> ==
> 2020-11-24T16:19:25.5484882Z The following Java processes are running (JPS)
> 2020-11-24T16:19:25.5485475Z 
> ==
> 2020-11-24T16:19:25.5694497Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:25.7263048Z 16192 surefirebooter5057948964630155904.jar
> 2020-11-24T16:19:25.7263515Z 18566 Jps
> 2020-11-24T16:19:25.7263709Z 959 Launcher
> 2020-11-24T16:19:25.7411148Z 
> ==
> 2020-11-24T16:19:25.7427013Z Printing stack trace of Java process 16192
> 2020-11-24T16:19:25.7427369Z 
> ==
> 2020-11-24T16:19:25.7484365Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:26.0848776Z 2020-11-24 16:19:26
> 2020-11-24T16:19:26.0849578Z Full thread dump OpenJDK 64-Bit Server VM 
> (25.275-b01 mixed mode):
> 2020-11-24T16:19:26.0849831Z 
> 2020-11-24T16:19:26.0850185Z "Attach Listener" #32 daemon prio=9 os_prio=0 
> tid=0x7fc148001000 nid=0x48e7 waiting on condition [0x]
> 2020-11-24T16:19:26.0850595Zjava.lang.Thread.State: RUNNABLE
> 2020-11-24T16:19:26.0850814Z 
> 2020-11-24T16:19:26.0851375Z "testcontainers-ryuk" #31 daemon prio=5 
> os_prio=0 tid=0x7fc251232000 nid=0x3fb0 in Object.wait() 
> [0x7fc1012c4000]
> 2020-11-24T16:19:26.0854688Zjava.lang.Thread.State: TIMED_WAITING (on 
> object monitor)
> 2020-11-24T16:19:26.0855379Z  at java.lang.Object.wait(Native Method)
> 2020-11-24T16:19:26.0855844Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$null$1(ResourceReaper.java:142)
> 2020-11-24T16:19:26.0857272Z  - locked <0x8e2bd2d0> (a 
> java.util.ArrayList)
> 2020-11-24T16:19:26.0857977Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$93/1981729428.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0858471Z  at 
> org.rnorth.ducttape.ratelimits.RateLimiter.doWhenReady(RateLimiter.java:27)
> 2020-11-24T16:19:26.0858961Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$start$2(ResourceReaper.java:133)
> 2020-11-24T16:19:26.0859422Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$92/40191541.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0859788Z  at java.lang.Thread.run(Thread.java:748)
> 2020-11-24T16:19:26.0860030Z 
> 2020-11-24T16:19:26.0860371Z "process reaper" #24 daemon prio=10 os_prio=0 
> tid=0x7fc0f803b800 nid=0x3f92 waiting on condition [0x7fc10296e000]
> 2020-11-24T16:19:26.0860913Zjava.lang.Thread.State: TIMED_WAITING 
> (parking)
> 2020-11-24T16:19:26.0861387Z  at sun.misc.Unsafe.park(Native Method)
> 2020-11-24T16:19:26.0862495Z  - parking to wait for  <0x8814bf30> (a 
> java.util.concurrent.SynchronousQueue$TransferStack)
> 2020-11-24T16:19:26.0863253Z  at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> 2020-11-24T16:19:26.0863760Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
> 2020-11-24T16:19:26.0864274Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
> 2020-11-24T16:19:26.0864762Z  at 
> java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
> 2020-11-24T16:19:26.0865299Z  at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
> 2020-11-24T16:19:26.0866000Z  at 
> 

[jira] [Updated] (FLINK-22627) Remove SlotManagerImpl

2021-06-10 Thread Yangze Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yangze Guo updated FLINK-22627:
---
Labels: pull-request-available  (was: pull-request-available stale-assigned)

> Remove SlotManagerImpl
> --
>
> Key: FLINK-22627
> URL: https://issues.apache.org/jira/browse/FLINK-22627
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0
>Reporter: Yangze Guo
>Assignee: Yangze Guo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> As the declarative resource management is completed (FLINK-10404) and the old 
> {{SlotPoolImpl}} is removed in FLINK-22477, it's time to remove the 
> {{SlotManagerImpl}} and
>  all related classes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22625) FileSinkMigrationITCase unstable

2021-06-10 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361349#comment-17361349
 ] 

Xintong Song commented on FLINK-22625:
--

Thanks, [~gaoyunhaii]. I've assigned you to the ticket.

> FileSinkMigrationITCase unstable
> 
>
> Key: FLINK-22625
> URL: https://issues.apache.org/jira/browse/FLINK-22625
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.14.0
>Reporter: Dawid Wysakowicz
>Assignee: Yun Gao
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17817=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=420bd9ec-164e-562e-8947-0dacde3cec91=22179
> {code}
> May 11 00:43:40 Caused by: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint 
> triggering task Sink: Unnamed (1/3) of job 733a4777cca170f86724832642e2a8b1 
> has not being executed at the moment. Aborting checkpoint. Failure reason: 
> Not all required tasks are currently running.
> May 11 00:43:40   at 
> org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.checkTasksStarted(DefaultCheckpointPlanCalculator.java:152)
> May 11 00:43:40   at 
> org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.lambda$calculateCheckpointPlan$1(DefaultCheckpointPlanCalculator.java:114)
> May 11 00:43:40   at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:440)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:208)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158)
> May 11 00:43:40   at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
> May 11 00:43:40   at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
> May 11 00:43:40   at 
> scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
> May 11 00:43:40   at 
> akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
> May 11 00:43:40   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
> May 11 00:43:40   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> May 11 00:43:40   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> May 11 00:43:40   at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
> May 11 00:43:40   at 
> akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
> May 11 00:43:40   at 
> akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
> May 11 00:43:40   at akka.actor.ActorCell.invoke(ActorCell.scala:561)
> May 11 00:43:40   at 
> akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
> May 11 00:43:40   at akka.dispatch.Mailbox.run(Mailbox.scala:225)
> May 11 00:43:40   at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-22625) FileSinkMigrationITCase unstable

2021-06-10 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song reassigned FLINK-22625:


Assignee: Yun Gao

> FileSinkMigrationITCase unstable
> 
>
> Key: FLINK-22625
> URL: https://issues.apache.org/jira/browse/FLINK-22625
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.14.0
>Reporter: Dawid Wysakowicz
>Assignee: Yun Gao
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17817=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=420bd9ec-164e-562e-8947-0dacde3cec91=22179
> {code}
> May 11 00:43:40 Caused by: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint 
> triggering task Sink: Unnamed (1/3) of job 733a4777cca170f86724832642e2a8b1 
> has not being executed at the moment. Aborting checkpoint. Failure reason: 
> Not all required tasks are currently running.
> May 11 00:43:40   at 
> org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.checkTasksStarted(DefaultCheckpointPlanCalculator.java:152)
> May 11 00:43:40   at 
> org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.lambda$calculateCheckpointPlan$1(DefaultCheckpointPlanCalculator.java:114)
> May 11 00:43:40   at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:440)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:208)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158)
> May 11 00:43:40   at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
> May 11 00:43:40   at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
> May 11 00:43:40   at 
> scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
> May 11 00:43:40   at 
> akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
> May 11 00:43:40   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
> May 11 00:43:40   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> May 11 00:43:40   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> May 11 00:43:40   at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
> May 11 00:43:40   at 
> akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
> May 11 00:43:40   at 
> akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
> May 11 00:43:40   at akka.actor.ActorCell.invoke(ActorCell.scala:561)
> May 11 00:43:40   at 
> akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
> May 11 00:43:40   at akka.dispatch.Mailbox.run(Mailbox.scala:225)
> May 11 00:43:40   at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-22963) The description of taskmanager.memory.task.heap.size in the official document is incorrect

2021-06-10 Thread JasonLee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JasonLee updated FLINK-22963:
-
Comment: was deleted

(was: [~xintongsong] 

Thank you for your confirmation and changes to the issue.

When I checked the source code across multiple versions, I found that the 
description of TASK_HEAP_MEMORY in the 
org.apache.flink.configuration.TaskManagerOptions class is also wrong, for 
example in Flink 1.10. So in addition to the versions you specified, do the 
other affected versions need to be fixed as well?

Both the source code and the documentation need to be fixed. If possible, I 
can fix this issue for the community.)

> The description of taskmanager.memory.task.heap.size in the official document 
> is incorrect
> --
>
> Key: FLINK-22963
> URL: https://issues.apache.org/jira/browse/FLINK-22963
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: JasonLee
>Priority: Major
>  Labels: documentation, starter
>
> While studying the TaskManager memory model, I found a problem in the 
> official documentation: the description of taskmanager.memory.task.heap.size 
> is incorrect.
> According to the official memory model, the correct description should be: 
> Task Heap Memory size for TaskExecutors. This is the size of JVM heap memory 
> reserved for tasks. If not specified, it will be derived as Total Flink 
> Memory minus Framework Heap Memory, Framework Off-Heap Memory, Task Off-Heap 
> Memory, Managed Memory and Network Memory.
> However, the official document currently fails to subtract the Framework 
> Off-Heap Memory.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22963) The description of taskmanager.memory.task.heap.size in the official document is incorrect

2021-06-10 Thread JasonLee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361346#comment-17361346
 ] 

JasonLee commented on FLINK-22963:
--

[~xintongsong] 

Thank you for your confirmation and changes to the issue.

When I checked the source code across multiple versions, I found that the 
description of TASK_HEAP_MEMORY in the 
org.apache.flink.configuration.TaskManagerOptions class is also wrong, for 
example in Flink 1.10. So in addition to the versions you specified, do the 
other affected versions need to be fixed as well?

Both the source code and the documentation need to be fixed. If possible, I 
can fix this issue for the community.

> The description of taskmanager.memory.task.heap.size in the official document 
> is incorrect
> --
>
> Key: FLINK-22963
> URL: https://issues.apache.org/jira/browse/FLINK-22963
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: JasonLee
>Priority: Major
>  Labels: documentation, starter
>
> While studying the TaskManager memory model, I found a problem in the 
> official documentation: the description of taskmanager.memory.task.heap.size 
> is incorrect.
> According to the official memory model, the correct description should be: 
> Task Heap Memory size for TaskExecutors. This is the size of JVM heap memory 
> reserved for tasks. If not specified, it will be derived as Total Flink 
> Memory minus Framework Heap Memory, Framework Off-Heap Memory, Task Off-Heap 
> Memory, Managed Memory and Network Memory.
> However, the official document currently fails to subtract the Framework 
> Off-Heap Memory.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22933) Upgrade the Flink Fabric8io/kubernetes-client version to >=5.4.0 to be FIPS compliant

2021-06-10 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361345#comment-17361345
 ] 

Yang Wang commented on FLINK-22933:
---

[~Fuyao Li] Maybe I did not make myself clear.

IIRC, we cannot have two copies of the same dependency with different versions 
in Maven. So what we need to do is build a shaded dependency. For example, we 
could have another module, named "shaded-flink-kubernetes", which just needs 
to add {{flink-kubernetes}} as a dependency and then relocate 
{{io.fabric8.kubernetes.client}} to another pattern (e.g. 
{{org.apache.flink.shaded.io.fabric8.kubernetes.client}}). After that, the 
k8s-operator could depend on "shaded-flink-kubernetes" instead of 
"flink-kubernetes".

Of course, you could then also have another, different version of the fabric8 
kubernetes-client.
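
For illustration, a minimal sketch of that relocation with the standard 
maven-shade-plugin, in the pom.xml of the hypothetical 
"shaded-flink-kubernetes" module (module name and shaded pattern as assumed 
above):

{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>io.fabric8.kubernetes.client</pattern>
            <shadedPattern>org.apache.flink.shaded.io.fabric8.kubernetes.client</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}

The operator module can then depend on this shaded artifact and pull in its 
own fabric8 kubernetes-client version without a classpath clash.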

> Upgrade the Flink Fabric8io/kubernetes-client version to >=5.4.0 to be FIPS 
> compliant
> -
>
> Key: FLINK-22933
> URL: https://issues.apache.org/jira/browse/FLINK-22933
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes
>Affects Versions: 1.13.0, 1.13.1
>Reporter: Fuyao Li
>Priority: Critical
> Fix For: 1.14.0
>
> Attachments: pom.xml
>
>
> The current Fabric8io version in Flink is 4.9.2
> See link: 
> [https://github.com/apache/flink/blob/master/flink-kubernetes/pom.xml#L35]
> This version of the Fabric8io library is not FIPS compliant 
> ([https://www.sdxcentral.com/security/definitions/what-does-mean-fips-compliant/]).
> Such functionality was added to Fabric8io only recently. See the links below.
> [https://github.com/fabric8io/kubernetes-client/pull/2788]
>  [https://github.com/fabric8io/kubernetes-client/issues/2732]
>  
> I am trying to write a native kubernetes operator leveraging APIs and 
> interfaces provided by Flink source code. For example, ApplicationDeployer.
> I am writing my own implementation based on Yang's example code: 
> [https://github.com/wangyang0918/flink-native-k8s-operator]
>  
> Using version 4.9.2 for my operator works perfectly, but it could 
> cause FIPS compliance issues.
>  
> Using version 5.4.0 runs into issues, since the Fabric8io version 4 and 
> version 5 APIs are not fully compatible. I saw the errors below.
> {code:java}
> Exception in thread "main" java.lang.AbstractMethodError: Receiver class 
> io.fabric8.kubernetes.client.handlers.ServiceHandler does not define or 
> inherit an implementation of the resolved method 'abstract java.lang.Object 
> create(okhttp3.OkHttpClient, io.fabric8.kubernetes.client.Config, 
> java.lang.String, java.lang.Object, boolean)' of interface 
> io.fabric8.kubernetes.client.ResourceHandler.Exception in thread "main" 
> java.lang.AbstractMethodError: Receiver class 
> io.fabric8.kubernetes.client.handlers.ServiceHandler does not define or 
> inherit an implementation of the resolved method 'abstract java.lang.Object 
> create(okhttp3.OkHttpClient, io.fabric8.kubernetes.client.Config, 
> java.lang.String, java.lang.Object, boolean)' of interface 
> io.fabric8.kubernetes.client.ResourceHandler. at 
> io.fabric8.kubernetes.client.utils.CreateOrReplaceHelper.lambda$createOrReplaceItem$0(CreateOrReplaceHelper.java:77)
>  at 
> io.fabric8.kubernetes.client.utils.CreateOrReplaceHelper.createOrReplace(CreateOrReplaceHelper.java:56)
>  at 
> io.fabric8.kubernetes.client.utils.CreateOrReplaceHelper.createOrReplaceItem(CreateOrReplaceHelper.java:91)
>  at 
> io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.createOrReplaceOrDeleteExisting(NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.java:454)
>  at 
> io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.createOrReplace(NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.java:297)
>  at 
> io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.createOrReplace(NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.java:66)
>  at 
> org.apache.flink.kubernetes.kubeclient.Fabric8FlinkKubeClient.createJobManagerComponent(Fabric8FlinkKubeClient.java:113)
>  at 
> org.apache.flink.kubernetes.KubernetesClusterDescriptor.deployClusterInternal(KubernetesClusterDescriptor.java:274)
>  at 
> org.apache.flink.kubernetes.KubernetesClusterDescriptor.deployApplicationCluster(KubernetesClusterDescriptor.java:208)
>  at 
> org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:67)
>  at 
> org.apache.flink.kubernetes.operator.controller.FlinkApplicationController.reconcile(FlinkApplicationController.java:207)
>  at 
> 

[jira] [Commented] (FLINK-22625) FileSinkMigrationITCase unstable

2021-06-10 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361343#comment-17361343
 ] 

Yun Gao commented on FLINK-22625:
-

[~xintongsong] sure, I'll have a look.

> FileSinkMigrationITCase unstable
> 
>
> Key: FLINK-22625
> URL: https://issues.apache.org/jira/browse/FLINK-22625
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.14.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17817=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=420bd9ec-164e-562e-8947-0dacde3cec91=22179
> {code}
> May 11 00:43:40 Caused by: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint 
> triggering task Sink: Unnamed (1/3) of job 733a4777cca170f86724832642e2a8b1 
> has not being executed at the moment. Aborting checkpoint. Failure reason: 
> Not all required tasks are currently running.
> May 11 00:43:40   at 
> org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.checkTasksStarted(DefaultCheckpointPlanCalculator.java:152)
> May 11 00:43:40   at 
> org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.lambda$calculateCheckpointPlan$1(DefaultCheckpointPlanCalculator.java:114)
> May 11 00:43:40   at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:440)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:208)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158)
> May 11 00:43:40   at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
> May 11 00:43:40   at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
> May 11 00:43:40   at 
> scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
> May 11 00:43:40   at 
> akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
> May 11 00:43:40   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
> May 11 00:43:40   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> May 11 00:43:40   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> May 11 00:43:40   at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
> May 11 00:43:40   at 
> akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
> May 11 00:43:40   at 
> akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
> May 11 00:43:40   at akka.actor.ActorCell.invoke(ActorCell.scala:561)
> May 11 00:43:40   at 
> akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
> May 11 00:43:40   at akka.dispatch.Mailbox.run(Mailbox.scala:225)
> May 11 00:43:40   at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20329) Elasticsearch7DynamicSinkITCase hangs

2021-06-10 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361342#comment-17361342
 ] 

Yangze Guo commented on FLINK-20329:


Maybe we can temporarily change the logger level to DEBUG to get a deeper 
insight into the hanging thread. WDYT?
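
A minimal sketch of what that could look like in the connector's test logging 
configuration (the exact file and logger name here are assumptions):

{code}
# e.g. in a log4j2-test.properties used by the Elasticsearch connector tests
logger.es.name = org.apache.flink.streaming.connectors.elasticsearch
logger.es.level = DEBUG
{code}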

> Elasticsearch7DynamicSinkITCase hangs
> -
>
> Key: FLINK-20329
> URL: https://issues.apache.org/jira/browse/FLINK-20329
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10052=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20
> {code}
> 2020-11-24T16:04:05.9260517Z [INFO] Running 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7DynamicSinkITCase
> 2020-11-24T16:19:25.5481231Z 
> ==
> 2020-11-24T16:19:25.5483549Z Process produced no output for 900 seconds.
> 2020-11-24T16:19:25.5484064Z 
> ==
> 2020-11-24T16:19:25.5484498Z 
> ==
> 2020-11-24T16:19:25.5484882Z The following Java processes are running (JPS)
> 2020-11-24T16:19:25.5485475Z 
> ==
> 2020-11-24T16:19:25.5694497Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:25.7263048Z 16192 surefirebooter5057948964630155904.jar
> 2020-11-24T16:19:25.7263515Z 18566 Jps
> 2020-11-24T16:19:25.7263709Z 959 Launcher
> 2020-11-24T16:19:25.7411148Z 
> ==
> 2020-11-24T16:19:25.7427013Z Printing stack trace of Java process 16192
> 2020-11-24T16:19:25.7427369Z 
> ==
> 2020-11-24T16:19:25.7484365Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:26.0848776Z 2020-11-24 16:19:26
> 2020-11-24T16:19:26.0849578Z Full thread dump OpenJDK 64-Bit Server VM 
> (25.275-b01 mixed mode):
> 2020-11-24T16:19:26.0849831Z 
> 2020-11-24T16:19:26.0850185Z "Attach Listener" #32 daemon prio=9 os_prio=0 
> tid=0x7fc148001000 nid=0x48e7 waiting on condition [0x]
> 2020-11-24T16:19:26.0850595Zjava.lang.Thread.State: RUNNABLE
> 2020-11-24T16:19:26.0850814Z 
> 2020-11-24T16:19:26.0851375Z "testcontainers-ryuk" #31 daemon prio=5 
> os_prio=0 tid=0x7fc251232000 nid=0x3fb0 in Object.wait() 
> [0x7fc1012c4000]
> 2020-11-24T16:19:26.0854688Zjava.lang.Thread.State: TIMED_WAITING (on 
> object monitor)
> 2020-11-24T16:19:26.0855379Z  at java.lang.Object.wait(Native Method)
> 2020-11-24T16:19:26.0855844Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$null$1(ResourceReaper.java:142)
> 2020-11-24T16:19:26.0857272Z  - locked <0x8e2bd2d0> (a 
> java.util.ArrayList)
> 2020-11-24T16:19:26.0857977Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$93/1981729428.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0858471Z  at 
> org.rnorth.ducttape.ratelimits.RateLimiter.doWhenReady(RateLimiter.java:27)
> 2020-11-24T16:19:26.0858961Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$start$2(ResourceReaper.java:133)
> 2020-11-24T16:19:26.0859422Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$92/40191541.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0859788Z  at java.lang.Thread.run(Thread.java:748)
> 2020-11-24T16:19:26.0860030Z 
> 2020-11-24T16:19:26.0860371Z "process reaper" #24 daemon prio=10 os_prio=0 
> tid=0x7fc0f803b800 nid=0x3f92 waiting on condition [0x7fc10296e000]
> 2020-11-24T16:19:26.0860913Zjava.lang.Thread.State: TIMED_WAITING 
> (parking)
> 2020-11-24T16:19:26.0861387Z  at sun.misc.Unsafe.park(Native Method)
> 2020-11-24T16:19:26.0862495Z  - parking to wait for  <0x8814bf30> (a 
> java.util.concurrent.SynchronousQueue$TransferStack)
> 2020-11-24T16:19:26.0863253Z  at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> 2020-11-24T16:19:26.0863760Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
> 2020-11-24T16:19:26.0864274Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
> 2020-11-24T16:19:26.0864762Z  at 
> java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
> 2020-11-24T16:19:26.0865299Z  at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
> 

[jira] [Commented] (FLINK-22963) The description of taskmanager.memory.task.heap.size in the official document is incorrect

2021-06-10 Thread JasonLee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361341#comment-17361341
 ] 

JasonLee commented on FLINK-22963:
--

Hi Xintong,


Thank you for your confirmation and for updating the issue.


When I checked the source code across multiple versions, I found that the 
description of TASK_HEAP_MEMORY in the 
org.apache.flink.configuration.TaskManagerOptions class is also wrong, for 
example in Flink 1.10.


So in addition to the versions you specified, do other affected versions need 
to be repaired as well, so that both the source code and the documentation can 
be fixed? If possible, I can fix this issue for the community.


Best,
Jason


On 2021-06-11 09:53, Xintong Song (Jira) wrote:

[ 
https://issues.apache.org/jira/browse/FLINK-22963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-22963:
-
Affects Version/s: (was: 1.13.0)
(was: 1.12.0)
(was: 1.10.0)
1.14.0
1.13.1
1.12.4

The description of taskmanager.memory.task.heap.size in the official document 
is incorrect
--

Key: FLINK-22963
URL: https://issues.apache.org/jira/browse/FLINK-22963
Project: Flink
Issue Type: Bug
Components: Documentation
Affects Versions: 1.14.0, 1.13.1, 1.12.4
Reporter: JasonLee
Priority: Major
Labels: documentation

While studying the TaskManager memory model, I found a problem in the official 
documentation: the description of taskmanager.memory.task.heap.size is 
incorrect.
According to the official memory model, the correct description should be: Task 
Heap Memory size for TaskExecutors. This is the size of JVM heap memory 
reserved for tasks. If not specified, it will be derived as Total Flink Memory 
minus Framework Heap Memory, Framework Off-Heap Memory, Task Off-Heap Memory, 
Managed Memory and Network Memory.
However, the official documentation omits Framework Off-Heap Memory from the 
components that should be subtracted.
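
To make the corrected derivation concrete, here is an illustrative calculation 
with made-up numbers (not Flink defaults); only the subtraction itself reflects 
the description above:

{code:java}
public class TaskHeapDerivation {
    public static void main(String[] args) {
        // Hypothetical sizes in MB -- illustration only, not Flink defaults.
        long totalFlinkMemory = 1280;
        long frameworkHeap    = 128;
        long frameworkOffHeap = 128; // the component the documentation omits
        long taskOffHeap      = 0;
        long managedMemory    = 512;
        long networkMemory    = 128;

        // taskmanager.memory.task.heap.size, when not explicitly configured:
        long taskHeap = totalFlinkMemory - frameworkHeap - frameworkOffHeap
                - taskOffHeap - managedMemory - networkMemory;
        System.out.println("Derived task heap: " + taskHeap + " MB"); // 384 MB
    }
}
{code}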



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


> The description of taskmanager.memory.task.heap.size in the official document 
> is incorrect
> --
>
> Key: FLINK-22963
> URL: https://issues.apache.org/jira/browse/FLINK-22963
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: JasonLee
>Priority: Major
>  Labels: documentation, starter
>
> While studying the TaskManager memory model, I found a problem in the 
> official documentation: the description of taskmanager.memory.task.heap.size 
> is incorrect.
> According to the official memory model, the correct description should be: 
> Task Heap Memory size for TaskExecutors. This is the size of JVM heap memory 
> reserved for tasks. If not specified, it will be derived as Total Flink 
> Memory minus Framework Heap Memory, Framework Off-Heap Memory, Task Off-Heap 
> Memory, Managed Memory and Network Memory.
> However, the official documentation omits Framework Off-Heap Memory from the 
> components that should be subtracted.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22946) Network buffer deadlock introduced by unaligned checkpoint

2021-06-10 Thread Guokuai Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guokuai Huang updated FLINK-22946:
--
Description: 
We recently encountered a deadlock when using unaligned checkpoints. Below are 
the two thread stacks involved in the deadlock; a minimal sketch of the 
locking pattern follows the first stack:
{code:java}
"Channel state writer Join(xx) (34/256)#1": at 
org.apache.flink.runtime.io.network.partition.consumer.BufferManager.notifyBufferAvailable(BufferManager.java:296)
 - waiting to lock <0x0007296dfa90> (a 
org.apache.flink.runtime.io.network.partition.consumer.BufferManager$AvailableBufferQueue)
 at 
org.apache.flink.runtime.io.network.buffer.LocalBufferPool.fireBufferAvailableNotification(LocalBufferPool.java:507)
 at 
org.apache.flink.runtime.io.network.buffer.LocalBufferPool.recycle(LocalBufferPool.java:494)
 at 
org.apache.flink.runtime.io.network.buffer.LocalBufferPool.recycle(LocalBufferPool.java:460)
 at 
org.apache.flink.runtime.io.network.buffer.NetworkBuffer.deallocate(NetworkBuffer.java:182)
 at 
org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.handleRelease(AbstractReferenceCountedByteBuf.java:110)
 at 
org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:100)
 at 
org.apache.flink.runtime.io.network.buffer.NetworkBuffer.recycleBuffer(NetworkBuffer.java:156)
 at 
org.apache.flink.runtime.io.network.partition.consumer.BufferManager$AvailableBufferQueue.addExclusiveBuffer(BufferManager.java:399)
 at 
org.apache.flink.runtime.io.network.partition.consumer.BufferManager.recycle(BufferManager.java:200)
 - locked <0x0007296bc450> (a 
org.apache.flink.runtime.io.network.partition.consumer.BufferManager$AvailableBufferQueue)
 at 
org.apache.flink.runtime.io.network.buffer.NetworkBuffer.deallocate(NetworkBuffer.java:182)
 at 
org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.handleRelease(AbstractReferenceCountedByteBuf.java:110)
 at 
org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:100)
 at 
org.apache.flink.runtime.io.network.buffer.NetworkBuffer.recycleBuffer(NetworkBuffer.java:156)
 at 
org.apache.flink.runtime.checkpoint.channel.ChannelStateCheckpointWriter.write(ChannelStateCheckpointWriter.java:173)
 at 
org.apache.flink.runtime.checkpoint.channel.ChannelStateCheckpointWriter.writeInput(ChannelStateCheckpointWriter.java:131)
 at 
org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequest.lambda$write$0(ChannelStateWriteRequest.java:63)
 at 
org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequest$$Lambda$785/722492780.accept(Unknown
 Source) at 
org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequest.lambda$buildWriteRequest$2(ChannelStateWriteRequest.java:93)
 at 
org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequest$$Lambda$786/1360749026.accept(Unknown
 Source) at 
org.apache.flink.runtime.checkpoint.channel.CheckpointInProgressRequest.execute(ChannelStateWriteRequest.java:212)
 at 
org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestDispatcherImpl.dispatchInternal(ChannelStateWriteRequestDispatcherImpl.java:82)
 at 
org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestDispatcherImpl.dispatch(ChannelStateWriteRequestDispatcherImpl.java:59)
 at 
org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestExecutorImpl.loop(ChannelStateWriteRequestExecutorImpl.java:96)
 at 
org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestExecutorImpl.run(ChannelStateWriteRequestExecutorImpl.java:75)
 at 
org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestExecutorImpl$$Lambda$253/502209879.run(Unknown
 Source) at java.lang.Thread.run(Thread.java:745){code}
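
The shape here is a classic lock-ordering deadlock: each thread holds one 
AvailableBufferQueue monitor and waits for the other. A minimal, 
self-contained sketch of that pattern (plain Java monitors, not Flink code):

{code:java}
public class LockOrderDeadlock {
    private static final Object queueA = new Object(); // stands in for one buffer queue
    private static final Object queueB = new Object(); // stands in for the other

    public static void main(String[] args) {
        // Mirrors the channel state writer: holds B, then waits for A.
        Thread writer = new Thread(() -> {
            synchronized (queueB) {
                pause();
                synchronized (queueA) { }
            }
        });
        // Mirrors the task thread: holds A, then waits for B.
        Thread task = new Thread(() -> {
            synchronized (queueA) {
                pause();
                synchronized (queueB) { }
            }
        });
        writer.start();
        task.start(); // both threads now block forever, as in the stacks here
    }

    private static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException ignored) { }
    }
}
{code}

The second of the two stacks, shown next, is the mirror image of the first: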
{code:java}
"Join(xx) (34/256)#1": at 
org.apache.flink.runtime.io.network.partition.consumer.BufferManager.notifyBufferAvailable(BufferManager.java:296)
 - waiting to lock <0x0007296bc450> (a 
org.apache.flink.runtime.io.network.partition.consumer.BufferManager$AvailableBufferQueue)
 at 
org.apache.flink.runtime.io.network.buffer.LocalBufferPool.fireBufferAvailableNotification(LocalBufferPool.java:507)
 at 
org.apache.flink.runtime.io.network.buffer.LocalBufferPool.recycle(LocalBufferPool.java:494)
 at 
org.apache.flink.runtime.io.network.buffer.LocalBufferPool.recycle(LocalBufferPool.java:460)
 at 
org.apache.flink.runtime.io.network.buffer.NetworkBuffer.deallocate(NetworkBuffer.java:182)
 at 
org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.handleRelease(AbstractReferenceCountedByteBuf.java:110)
 at 
org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:100)
 at 
org.apache.flink.runtime.io.network.buffer.NetworkBuffer.recycleBuffer(NetworkBuffer.java:156)
 at 

[jira] [Updated] (FLINK-22946) Network buffer deadlock introduced by unaligned checkpoint

2021-06-10 Thread Guokuai Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guokuai Huang updated FLINK-22946:
--
Priority: Blocker  (was: Critical)

> Network buffer deadlock introduced by unaligned checkpoint
> --
>
> Key: FLINK-22946
> URL: https://issues.apache.org/jira/browse/FLINK-22946
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.13.0, 1.13.1
>Reporter: Guokuai Huang
>Priority: Blocker
> Attachments: Screen Shot 2021-06-09 at 6.39.47 PM.png, Screen Shot 
> 2021-06-09 at 7.02.04 PM.png
>
>
> We recently encountered a deadlock when using unaligned checkpoints. Below 
> are the two thread stacks involved in the deadlock:
> {code:java}
> "Channel state writer Join(xx) (34/256)#1":"Channel state writer 
> Join(xx) (34/256)#1": at 
> org.apache.flink.runtime.io.network.partition.consumer.BufferManager.notifyBufferAvailable(BufferManager.java:296)
>  - waiting to lock <0x0007296dfa90> (a 
> org.apache.flink.runtime.io.network.partition.consumer.BufferManager$AvailableBufferQueue)
>  at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.fireBufferAvailableNotification(LocalBufferPool.java:507)
>  at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.recycle(LocalBufferPool.java:494)
>  at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.recycle(LocalBufferPool.java:460)
>  at 
> org.apache.flink.runtime.io.network.buffer.NetworkBuffer.deallocate(NetworkBuffer.java:182)
>  at 
> org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.handleRelease(AbstractReferenceCountedByteBuf.java:110)
>  at 
> org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:100)
>  at 
> org.apache.flink.runtime.io.network.buffer.NetworkBuffer.recycleBuffer(NetworkBuffer.java:156)
>  at 
> org.apache.flink.runtime.io.network.partition.consumer.BufferManager$AvailableBufferQueue.addExclusiveBuffer(BufferManager.java:399)
>  at 
> org.apache.flink.runtime.io.network.partition.consumer.BufferManager.recycle(BufferManager.java:200)
>  - locked <0x0007296bc450> (a 
> org.apache.flink.runtime.io.network.partition.consumer.BufferManager$AvailableBufferQueue)
>  at 
> org.apache.flink.runtime.io.network.buffer.NetworkBuffer.deallocate(NetworkBuffer.java:182)
>  at 
> org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.handleRelease(AbstractReferenceCountedByteBuf.java:110)
>  at 
> org.apache.flink.shaded.netty4.io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:100)
>  at 
> org.apache.flink.runtime.io.network.buffer.NetworkBuffer.recycleBuffer(NetworkBuffer.java:156)
>  at 
> org.apache.flink.runtime.checkpoint.channel.ChannelStateCheckpointWriter.write(ChannelStateCheckpointWriter.java:173)
>  at 
> org.apache.flink.runtime.checkpoint.channel.ChannelStateCheckpointWriter.writeInput(ChannelStateCheckpointWriter.java:131)
>  at 
> org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequest.lambda$write$0(ChannelStateWriteRequest.java:63)
>  at 
> org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequest$$Lambda$785/722492780.accept(Unknown
>  Source) at 
> org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequest.lambda$buildWriteRequest$2(ChannelStateWriteRequest.java:93)
>  at 
> org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequest$$Lambda$786/1360749026.accept(Unknown
>  Source) at 
> org.apache.flink.runtime.checkpoint.channel.CheckpointInProgressRequest.execute(ChannelStateWriteRequest.java:212)
>  at 
> org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestDispatcherImpl.dispatchInternal(ChannelStateWriteRequestDispatcherImpl.java:82)
>  at 
> org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestDispatcherImpl.dispatch(ChannelStateWriteRequestDispatcherImpl.java:59)
>  at 
> org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestExecutorImpl.loop(ChannelStateWriteRequestExecutorImpl.java:96)
>  at 
> org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestExecutorImpl.run(ChannelStateWriteRequestExecutorImpl.java:75)
>  at 
> org.apache.flink.runtime.checkpoint.channel.ChannelStateWriteRequestExecutorImpl$$Lambda$253/502209879.run(Unknown
>  Source) at java.lang.Thread.run(Thread.java:745)"Join(xx) (34/256)#1": 
> at 
> org.apache.flink.runtime.io.network.partition.consumer.BufferManager.notifyBufferAvailable(BufferManager.java:296)
>  - waiting to lock <0x0007296bc450> (a 
> org.apache.flink.runtime.io.network.partition.consumer.BufferManager$AvailableBufferQueue)
>  at 
> 

[jira] [Commented] (FLINK-20329) Elasticsearch7DynamicSinkITCase hangs

2021-06-10 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361338#comment-17361338
 ] 

Xintong Song commented on FLINK-20329:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18891=logs=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89=5d6e4255-0ea8-5e2a-f52c-c881b7872361=13401

> Elasticsearch7DynamicSinkITCase hangs
> -
>
> Key: FLINK-20329
> URL: https://issues.apache.org/jira/browse/FLINK-20329
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10052=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20
> {code}
> 2020-11-24T16:04:05.9260517Z [INFO] Running 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7DynamicSinkITCase
> 2020-11-24T16:19:25.5481231Z 
> ==
> 2020-11-24T16:19:25.5483549Z Process produced no output for 900 seconds.
> 2020-11-24T16:19:25.5484064Z 
> ==
> 2020-11-24T16:19:25.5484498Z 
> ==
> 2020-11-24T16:19:25.5484882Z The following Java processes are running (JPS)
> 2020-11-24T16:19:25.5485475Z 
> ==
> 2020-11-24T16:19:25.5694497Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:25.7263048Z 16192 surefirebooter5057948964630155904.jar
> 2020-11-24T16:19:25.7263515Z 18566 Jps
> 2020-11-24T16:19:25.7263709Z 959 Launcher
> 2020-11-24T16:19:25.7411148Z 
> ==
> 2020-11-24T16:19:25.7427013Z Printing stack trace of Java process 16192
> 2020-11-24T16:19:25.7427369Z 
> ==
> 2020-11-24T16:19:25.7484365Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:26.0848776Z 2020-11-24 16:19:26
> 2020-11-24T16:19:26.0849578Z Full thread dump OpenJDK 64-Bit Server VM 
> (25.275-b01 mixed mode):
> 2020-11-24T16:19:26.0849831Z 
> 2020-11-24T16:19:26.0850185Z "Attach Listener" #32 daemon prio=9 os_prio=0 
> tid=0x7fc148001000 nid=0x48e7 waiting on condition [0x]
> 2020-11-24T16:19:26.0850595Zjava.lang.Thread.State: RUNNABLE
> 2020-11-24T16:19:26.0850814Z 
> 2020-11-24T16:19:26.0851375Z "testcontainers-ryuk" #31 daemon prio=5 
> os_prio=0 tid=0x7fc251232000 nid=0x3fb0 in Object.wait() 
> [0x7fc1012c4000]
> 2020-11-24T16:19:26.0854688Zjava.lang.Thread.State: TIMED_WAITING (on 
> object monitor)
> 2020-11-24T16:19:26.0855379Z  at java.lang.Object.wait(Native Method)
> 2020-11-24T16:19:26.0855844Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$null$1(ResourceReaper.java:142)
> 2020-11-24T16:19:26.0857272Z  - locked <0x8e2bd2d0> (a 
> java.util.ArrayList)
> 2020-11-24T16:19:26.0857977Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$93/1981729428.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0858471Z  at 
> org.rnorth.ducttape.ratelimits.RateLimiter.doWhenReady(RateLimiter.java:27)
> 2020-11-24T16:19:26.0858961Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$start$2(ResourceReaper.java:133)
> 2020-11-24T16:19:26.0859422Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$92/40191541.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0859788Z  at java.lang.Thread.run(Thread.java:748)
> 2020-11-24T16:19:26.0860030Z 
> 2020-11-24T16:19:26.0860371Z "process reaper" #24 daemon prio=10 os_prio=0 
> tid=0x7fc0f803b800 nid=0x3f92 waiting on condition [0x7fc10296e000]
> 2020-11-24T16:19:26.0860913Zjava.lang.Thread.State: TIMED_WAITING 
> (parking)
> 2020-11-24T16:19:26.0861387Z  at sun.misc.Unsafe.park(Native Method)
> 2020-11-24T16:19:26.0862495Z  - parking to wait for  <0x8814bf30> (a 
> java.util.concurrent.SynchronousQueue$TransferStack)
> 2020-11-24T16:19:26.0863253Z  at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> 2020-11-24T16:19:26.0863760Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
> 2020-11-24T16:19:26.0864274Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
> 2020-11-24T16:19:26.0864762Z  at 
> java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
> 2020-11-24T16:19:26.0865299Z  at 
> 

[jira] [Updated] (FLINK-22419) testScheduleRunAsync fail

2021-06-10 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-22419:
-
Fix Version/s: 1.12.5

> testScheduleRunAsync fail
> -
>
> Key: FLINK-22419
> URL: https://issues.apache.org/jira/browse/FLINK-22419
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0
>Reporter: Guowei Ma
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0, 1.13.1, 1.12.5
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17077=logs=a549b384-c55a-52c0-c451-00e0477ab6db=81f2da51-a161-54c7-5b84-6001fed26530=6833
> {code:java}
> Apr 22 22:56:40 [ERROR] 
> testScheduleRunAsync(org.apache.flink.runtime.rpc.RpcEndpointTest)  Time 
> elapsed: 0.404 s  <<< FAILURE!
> Apr 22 22:56:40 java.lang.AssertionError
> Apr 22 22:56:40   at org.junit.Assert.fail(Assert.java:86)
> Apr 22 22:56:40   at org.junit.Assert.assertTrue(Assert.java:41)
> Apr 22 22:56:40   at org.junit.Assert.assertTrue(Assert.java:52)
> Apr 22 22:56:40   at 
> org.apache.flink.runtime.rpc.RpcEndpointTest.testScheduleRunAsync(RpcEndpointTest.java:318)
> Apr 22 22:56:40   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Apr 22 22:56:40   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Apr 22 22:56:40   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Apr 22 22:56:40   at java.lang.reflect.Method.invoke(Method.java:498)
> Apr 22 22:56:40   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Apr 22 22:56:40   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Apr 22 22:56:40   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Apr 22 22:56:40   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Apr 22 22:56:40   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Apr 22 22:56:40   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> Apr 22 22:56:40   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Apr 22 22:56:40   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Apr 22 22:56:40   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Apr 22 22:56:40   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Apr 22 22:56:40   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Apr 22 22:56:40   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Apr 22 22:56:40   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Apr 22 22:56:40   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Apr 22 22:56:40   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Apr 22 22:56:40   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Apr 22 22:56:40   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Apr 22 22:56:40   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (FLINK-22419) testScheduleRunAsync fail

2021-06-10 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song reopened FLINK-22419:
--

Reopening to port the fix to 1.12.

Instance on 1.12:
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18891=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=05b74a19-4ee4-5036-c46f-ada307df6cf0=7950

> testScheduleRunAsync fail
> -
>
> Key: FLINK-22419
> URL: https://issues.apache.org/jira/browse/FLINK-22419
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0
>Reporter: Guowei Ma
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0, 1.13.1
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17077=logs=a549b384-c55a-52c0-c451-00e0477ab6db=81f2da51-a161-54c7-5b84-6001fed26530=6833
> {code:java}
> Apr 22 22:56:40 [ERROR] 
> testScheduleRunAsync(org.apache.flink.runtime.rpc.RpcEndpointTest)  Time 
> elapsed: 0.404 s  <<< FAILURE!
> Apr 22 22:56:40 java.lang.AssertionError
> Apr 22 22:56:40   at org.junit.Assert.fail(Assert.java:86)
> Apr 22 22:56:40   at org.junit.Assert.assertTrue(Assert.java:41)
> Apr 22 22:56:40   at org.junit.Assert.assertTrue(Assert.java:52)
> Apr 22 22:56:40   at 
> org.apache.flink.runtime.rpc.RpcEndpointTest.testScheduleRunAsync(RpcEndpointTest.java:318)
> Apr 22 22:56:40   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Apr 22 22:56:40   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Apr 22 22:56:40   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Apr 22 22:56:40   at java.lang.reflect.Method.invoke(Method.java:498)
> Apr 22 22:56:40   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Apr 22 22:56:40   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Apr 22 22:56:40   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Apr 22 22:56:40   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Apr 22 22:56:40   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Apr 22 22:56:40   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> Apr 22 22:56:40   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Apr 22 22:56:40   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Apr 22 22:56:40   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Apr 22 22:56:40   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Apr 22 22:56:40   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Apr 22 22:56:40   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Apr 22 22:56:40   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Apr 22 22:56:40   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Apr 22 22:56:40   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Apr 22 22:56:40   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Apr 22 22:56:40   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Apr 22 22:56:40   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22625) FileSinkMigrationITCase unstable

2021-06-10 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361333#comment-17361333
 ] 

Xintong Song commented on FLINK-22625:
--

[~gaoyunhaii], could you please look into this test instability?

> FileSinkMigrationITCase unstable
> 
>
> Key: FLINK-22625
> URL: https://issues.apache.org/jira/browse/FLINK-22625
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.14.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17817=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=420bd9ec-164e-562e-8947-0dacde3cec91=22179
> {code}
> May 11 00:43:40 Caused by: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint 
> triggering task Sink: Unnamed (1/3) of job 733a4777cca170f86724832642e2a8b1 
> has not being executed at the moment. Aborting checkpoint. Failure reason: 
> Not all required tasks are currently running.
> May 11 00:43:40   at 
> org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.checkTasksStarted(DefaultCheckpointPlanCalculator.java:152)
> May 11 00:43:40   at 
> org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.lambda$calculateCheckpointPlan$1(DefaultCheckpointPlanCalculator.java:114)
> May 11 00:43:40   at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:440)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:208)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158)
> May 11 00:43:40   at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
> May 11 00:43:40   at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
> May 11 00:43:40   at 
> scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
> May 11 00:43:40   at 
> akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
> May 11 00:43:40   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
> May 11 00:43:40   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> May 11 00:43:40   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> May 11 00:43:40   at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
> May 11 00:43:40   at 
> akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
> May 11 00:43:40   at 
> akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
> May 11 00:43:40   at akka.actor.ActorCell.invoke(ActorCell.scala:561)
> May 11 00:43:40   at 
> akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
> May 11 00:43:40   at akka.dispatch.Mailbox.run(Mailbox.scala:225)
> May 11 00:43:40   at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22625) FileSinkMigrationITCase unstable

2021-06-10 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361332#comment-17361332
 ] 

Xintong Song commented on FLINK-22625:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18890=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=420bd9ec-164e-562e-8947-0dacde3cec91=21739

> FileSinkMigrationITCase unstable
> 
>
> Key: FLINK-22625
> URL: https://issues.apache.org/jira/browse/FLINK-22625
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.14.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17817=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=420bd9ec-164e-562e-8947-0dacde3cec91=22179
> {code}
> May 11 00:43:40 Caused by: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint 
> triggering task Sink: Unnamed (1/3) of job 733a4777cca170f86724832642e2a8b1 
> has not being executed at the moment. Aborting checkpoint. Failure reason: 
> Not all required tasks are currently running.
> May 11 00:43:40   at 
> org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.checkTasksStarted(DefaultCheckpointPlanCalculator.java:152)
> May 11 00:43:40   at 
> org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.lambda$calculateCheckpointPlan$1(DefaultCheckpointPlanCalculator.java:114)
> May 11 00:43:40   at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:440)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:208)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158)
> May 11 00:43:40   at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
> May 11 00:43:40   at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
> May 11 00:43:40   at 
> scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
> May 11 00:43:40   at 
> akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
> May 11 00:43:40   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
> May 11 00:43:40   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> May 11 00:43:40   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> May 11 00:43:40   at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
> May 11 00:43:40   at 
> akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
> May 11 00:43:40   at 
> akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
> May 11 00:43:40   at akka.actor.ActorCell.invoke(ActorCell.scala:561)
> May 11 00:43:40   at 
> akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
> May 11 00:43:40   at akka.dispatch.Mailbox.run(Mailbox.scala:225)
> May 11 00:43:40   at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22908) FileExecutionGraphInfoStoreTest.testPutSuspendedJobOnClusterShutdown fails on azure

2021-06-10 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361331#comment-17361331
 ] 

Xintong Song commented on FLINK-22908:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18890=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=7c61167f-30b3-5893-cc38-a9e3d057e392=7354

> FileExecutionGraphInfoStoreTest.testPutSuspendedJobOnClusterShutdown fails on 
> azure
> ---
>
> Key: FLINK-22908
> URL: https://issues.apache.org/jira/browse/FLINK-22908
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: Xintong Song
>Assignee: Fabian Paul
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0, 1.12.5, 1.13.2
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18754=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=7c61167f-30b3-5893-cc38-a9e3d057e392=7744
> {code}
> Jun 08 00:03:01 [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 4.21 s <<< FAILURE! - in 
> org.apache.flink.runtime.dispatcher.FileExecutionGraphInfoStoreTest
> Jun 08 00:03:01 [ERROR] 
> testPutSuspendedJobOnClusterShutdown(org.apache.flink.runtime.dispatcher.FileExecutionGraphInfoStoreTest)
>   Time elapsed: 2.763 s  <<< ERROR!
> Jun 08 00:03:01 org.apache.flink.util.FlinkException: Could not close 
> resource.
> Jun 08 00:03:01   at 
> org.apache.flink.util.AutoCloseableAsync.close(AutoCloseableAsync.java:39)
> Jun 08 00:03:01   at 
> org.apache.flink.runtime.dispatcher.FileExecutionGraphInfoStoreTest.testPutSuspendedJobOnClusterShutdown(FileExecutionGraphInfoStoreTest.java:349)
> Jun 08 00:03:01   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jun 08 00:03:01   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jun 08 00:03:01   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jun 08 00:03:01   at java.lang.reflect.Method.invoke(Method.java:498)
> Jun 08 00:03:01   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Jun 08 00:03:01   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jun 08 00:03:01   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Jun 08 00:03:01   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jun 08 00:03:01   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jun 08 00:03:01   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Jun 08 00:03:01   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Jun 08 00:03:01   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Jun 08 00:03:01   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Jun 08 00:03:01   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Jun 08 00:03:01   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Jun 08 00:03:01   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Jun 08 00:03:01   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Jun 08 00:03:01   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Jun 08 00:03:01   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Jun 08 00:03:01   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Jun 08 00:03:01   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Jun 08 00:03:01   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Jun 08 00:03:01   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Jun 08 00:03:01   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Jun 08 00:03:01   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> Jun 08 00:03:01   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> Jun 08 00:03:01   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> Jun 08 00:03:01   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> Jun 08 00:03:01   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> Jun 08 00:03:01   at 
> 

[jira] [Reopened] (FLINK-22952) docs_404_check fail on azure due to ruby version not available

2021-06-10 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song reopened FLINK-22952:
--

This happens again on master, after the PR was merged.

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18890=logs=6dc02e5c-5865-5c6a-c6c5-92d598e3fc43=404fcc1b-71ae-54f6-61c8-430a6aeff2b5

It seems the PR removed the Ruby usage only for the CI tasks; the cron tasks 
were overlooked.

> docs_404_check fail on azure due to ruby version not available
> --
>
> Key: FLINK-22952
> URL: https://issues.apache.org/jira/browse/FLINK-22952
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Affects Versions: 1.12.4
>Reporter: Xintong Song
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0, 1.12.5, 1.13.2
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18852=logs=6dc02e5c-5865-5c6a-c6c5-92d598e3fc43=404fcc1b-71ae-54f6-61c8-430a6aeff2b5
> {code}
> Starting: UseRubyVersion
> ==
> Task : Use Ruby version
> Description  : Use the specified version of Ruby from the tool cache, 
> optionally adding it to the PATH
> Version  : 0.186.0
> Author   : Microsoft Corporation
> Help : 
> https://docs.microsoft.com/azure/devops/pipelines/tasks/tool/use-ruby-version
> ==
> ##[error]Version spec = 2.4 for architecture %25s did not match any version 
> in Agent.ToolsDirectory.
> Available versions: /opt/hostedtoolcache
> 2.5.9,2.6.7,2.7.3,3.0.1
> If this is a Microsoft-hosted agent, check that this image supports 
> side-by-side versions of Ruby at https://aka.ms/hosted-agent-software.
> If this is a self-hosted agent, see how to configure side-by-side Ruby 
> versions at https://go.microsoft.com/fwlink/?linkid=2005989.
> Finishing: UseRubyVersion
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19863) SQLClientHBaseITCase.testHBase failed with "java.io.IOException: Process failed due to timeout"

2021-06-10 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361329#comment-17361329
 ] 

Xintong Song commented on FLINK-19863:
--

[~Leonard Xu], this is happening again. Would you like to take another look?

> SQLClientHBaseITCase.testHBase failed with "java.io.IOException: Process 
> failed due to timeout"
> ---
>
> Key: FLINK-19863
> URL: https://issues.apache.org/jira/browse/FLINK-19863
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.12.0, 1.12.3
>Reporter: Dian Fu
>Priority: Critical
>  Labels: auto-unassigned, pull-request-available, stale-critical, 
> test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8541=logs=91bf6583-3fb2-592f-e4d4-d79d79c3230a=3425d8ba-5f03-540a-c64b-51b8481bf7d6
> {code}
> 00:50:02,589 [main] INFO  
> org.apache.flink.tests.util.flink.FlinkDistribution  [] - Stopping 
> Flink cluster.
> 00:50:04,106 [main] INFO  
> org.apache.flink.tests.util.flink.FlinkDistribution  [] - Stopping 
> Flink cluster.
> 00:50:04,741 [main] INFO  
> org.apache.flink.tests.util.flink.LocalStandaloneFlinkResource [] - Backed up 
> logs to 
> /home/vsts/work/1/s/flink-end-to-end-tests/artifacts/flink-b3924665-1ac9-4309-8255-20f0dc94e7b9.
> 00:50:04,788 [main] INFO  
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource [] - Stopping 
> HBase Cluster
> 00:50:16,243 [main] ERROR 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase   [] - 
> 
> Test testHBase[0: 
> hbase-version:1.4.3](org.apache.flink.tests.util.hbase.SQLClientHBaseITCase) 
> failed with:
> java.io.IOException: Process failed due to timeout.
>   at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:130)
>   at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:108)
>   at 
> org.apache.flink.tests.util.flink.FlinkDistribution.submitSQLJob(FlinkDistribution.java:221)
>   at 
> org.apache.flink.tests.util.flink.LocalStandaloneFlinkResource$StandaloneClusterController.submitSQLJob(LocalStandaloneFlinkResource.java:196)
>   at 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase.executeSqlStatements(SQLClientHBaseITCase.java:215)
>   at 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase.testHBase(SQLClientHBaseITCase.java:152)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19863) SQLClientHBaseITCase.testHBase failed with "java.io.IOException: Process failed due to timeout"

2021-06-10 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17361328#comment-17361328
 ] 

Xintong Song commented on FLINK-19863:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18885=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529=28192

> SQLClientHBaseITCase.testHBase failed with "java.io.IOException: Process 
> failed due to timeout"
> ---
>
> Key: FLINK-19863
> URL: https://issues.apache.org/jira/browse/FLINK-19863
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.12.0, 1.12.3
>Reporter: Dian Fu
>Priority: Critical
>  Labels: auto-unassigned, pull-request-available, stale-critical, 
> test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8541=logs=91bf6583-3fb2-592f-e4d4-d79d79c3230a=3425d8ba-5f03-540a-c64b-51b8481bf7d6
> {code}
> 00:50:02,589 [main] INFO  
> org.apache.flink.tests.util.flink.FlinkDistribution  [] - Stopping 
> Flink cluster.
> 00:50:04,106 [main] INFO  
> org.apache.flink.tests.util.flink.FlinkDistribution  [] - Stopping 
> Flink cluster.
> 00:50:04,741 [main] INFO  
> org.apache.flink.tests.util.flink.LocalStandaloneFlinkResource [] - Backed up 
> logs to 
> /home/vsts/work/1/s/flink-end-to-end-tests/artifacts/flink-b3924665-1ac9-4309-8255-20f0dc94e7b9.
> 00:50:04,788 [main] INFO  
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource [] - Stopping 
> HBase Cluster
> 00:50:16,243 [main] ERROR 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase   [] - 
> 
> Test testHBase[0: 
> hbase-version:1.4.3](org.apache.flink.tests.util.hbase.SQLClientHBaseITCase) 
> failed with:
> java.io.IOException: Process failed due to timeout.
>   at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:130)
>   at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:108)
>   at 
> org.apache.flink.tests.util.flink.FlinkDistribution.submitSQLJob(FlinkDistribution.java:221)
>   at 
> org.apache.flink.tests.util.flink.LocalStandaloneFlinkResource$StandaloneClusterController.submitSQLJob(LocalStandaloneFlinkResource.java:196)
>   at 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase.executeSqlStatements(SQLClientHBaseITCase.java:215)
>   at 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase.testHBase(SQLClientHBaseITCase.java:152)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22963) The description of taskmanager.memory.task.heap.size in the official document is incorrect

2021-06-10 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-22963:
-
Labels: documentation starter  (was: documentation)

> The description of taskmanager.memory.task.heap.size in the official document 
> is incorrect
> --
>
> Key: FLINK-22963
> URL: https://issues.apache.org/jira/browse/FLINK-22963
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: JasonLee
>Priority: Major
>  Labels: documentation, starter
>
> While studying the TaskManager memory model, I found a problem in the 
> official documentation: the description of taskmanager.memory.task.heap.size 
> is incorrect.
> According to the official memory model, the correct description should be: 
> Task Heap Memory size for TaskExecutors. This is the size of JVM heap memory 
> reserved for tasks. If not specified, it will be derived as Total Flink 
> Memory minus Framework Heap Memory, Framework Off-Heap Memory, Task Off-Heap 
> Memory, Managed Memory and Network Memory.
> However, the official documentation omits Framework Off-Heap Memory from the 
> components that should be subtracted.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22963) The description of taskmanager.memory.task.heap.size in the official document is incorrect

2021-06-10 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-22963:
-
Affects Version/s: (was: 1.13.0)
   (was: 1.12.0)
   (was: 1.10.0)
   1.14.0
   1.13.1
   1.12.4

> The description of taskmanager.memory.task.heap.size in the official document 
> is incorrect
> --
>
> Key: FLINK-22963
> URL: https://issues.apache.org/jira/browse/FLINK-22963
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: JasonLee
>Priority: Major
>  Labels: documentation
>
> While studying the TaskManager memory model, I found a problem in the 
> official documentation: the description of taskmanager.memory.task.heap.size 
> is incorrect.
> According to the official memory model, the correct description should be: 
> Task Heap Memory size for TaskExecutors. This is the size of JVM heap memory 
> reserved for tasks. If not specified, it will be derived as Total Flink 
> Memory minus Framework Heap Memory, Framework Off-Heap Memory, Task Off-Heap 
> Memory, Managed Memory and Network Memory.
> However, the official documentation omits Framework Off-Heap Memory from the 
> components that should be subtracted.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22963) The description of taskmanager.memory.task.heap.size in the official document is incorrect

2021-06-10 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-22963:
-
Priority: Major  (was: Minor)

> The description of taskmanager.memory.task.heap.size in the official document 
> is incorrect
> --
>
> Key: FLINK-22963
> URL: https://issues.apache.org/jira/browse/FLINK-22963
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.10.0, 1.12.0, 1.13.0
>Reporter: JasonLee
>Priority: Major
>  Labels: documentation
>
> While studying the TaskManager memory model, I found a problem in the 
> official documentation: the description of taskmanager.memory.task.heap.size 
> is incorrect.
> According to the official memory model, the correct description should be: 
> Task Heap Memory size for TaskExecutors. This is the size of JVM heap memory 
> reserved for tasks. If not specified, it will be derived as Total Flink 
> Memory minus Framework Heap Memory, Framework Off-Heap Memory, Task Off-Heap 
> Memory, Managed Memory and Network Memory.
> However, the official documentation omits Framework Off-Heap Memory from the 
> components that should be subtracted.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22963) The description of taskmanager.memory.task.heap.size in the official document is incorrect

2021-06-10 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-22963:
-
Issue Type: Bug  (was: Improvement)

> The description of taskmanager.memory.task.heap.size in the official document 
> is incorrect
> --
>
> Key: FLINK-22963
> URL: https://issues.apache.org/jira/browse/FLINK-22963
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.10.0, 1.12.0, 1.13.0
>Reporter: JasonLee
>Priority: Minor
>  Labels: documentation
>
> While studying the TaskManager memory model, I found a problem in the 
> official documentation: the description of taskmanager.memory.task.heap.size 
> is incorrect.
> According to the official memory model, the correct description should be: 
> Task Heap Memory size for TaskExecutors. This is the size of JVM heap memory 
> reserved for tasks. If not specified, it will be derived as Total Flink 
> Memory minus Framework Heap Memory, Framework Off-Heap Memory, Task Off-Heap 
> Memory, Managed Memory and Network Memory.
> However, the official documentation omits Framework Off-Heap Memory from the 
> components that should be subtracted.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20988) Various resource allocation improvements for active resource managers

2021-06-10 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-20988:
-
Labels: Umbrella  (was: Umbrella stale-assigned)

> Various resource allocation improvements for active resource managers
> -
>
> Key: FLINK-20988
> URL: https://issues.apache.org/jira/browse/FLINK-20988
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Reporter: Xintong Song
>Assignee: Xintong Song
>Priority: Major
>  Labels: Umbrella
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22968) String type cannot be used when creating Flink SQL

2021-06-10 Thread DaChun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DaChun updated FLINK-22968:
---
Docs Text: 
package com.bytedance.one

import org.apache.flink.streaming.api.scala.{DataStream, 
StreamExecutionEnvironment}
import org.apache.flink.table.api.Table
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
import org.apache.flink.api.scala._


object test {
  def main(args: Array[String]): Unit = {


val env: StreamExecutionEnvironment = StreamExecutionEnvironment
  .createLocalEnvironmentWithWebUI()

val stream: DataStream[String] = env.readTextFile("data/wc.txt")

val tableEnvironment: StreamTableEnvironment = 
StreamTableEnvironment.create(env)


val table: Table = tableEnvironment.fromDataStream(stream)

tableEnvironment.createTemporaryView("wc", table)

val res: Table = tableEnvironment.sqlQuery("select * from wc")

tableEnvironment.toAppendStream[String](res).print()


env.execute("test")
  }
}



  was:
package com.bytedance.one

import org.apache.flink.streaming.api.scala.{DataStream, 
StreamExecutionEnvironment}
import org.apache.flink.table.api.Table
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
import org.apache.flink.api.scala._


object test {
  def main(args: Array[String]): Unit = {


val env: StreamExecutionEnvironment = StreamExecutionEnvironment
  .createLocalEnvironmentWithWebUI()

val stream: DataStream[String] = env.readTextFile("data/wc.txt")

val tableEnvironment: StreamTableEnvironment = 
StreamTableEnvironment.create(env)


val table: Table = tableEnvironment.fromDataStream(stream)

tableEnvironment.createTemporaryView("wc", table)

val res: Table = tableEnvironment.sqlQuery("select * from wc")

tableEnvironment.toAppendStream[String](res).print()


env.execute("DataStreamApp")
  }
}




>  String type cannot be used when creating Flink SQL
> ---
>
> Key: FLINK-22968
> URL: https://issues.apache.org/jira/browse/FLINK-22968
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.3, 1.13.1
> Environment: {color:#FF}*Flink-1.13.1 and Flink-1.12.1*{color}
>Reporter: DaChun
>Priority: Blocker
> Attachments: test.scala, this_error.txt
>
>
> When I run the program, the error is as follows (the full error is in 
> this_error.txt). The error message itself prompts me to file an issue:
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program 
> cannot be compiled. This is a bug. Please file an issue.
> Reading a stream and converting it into a table should be straightforward 
> when the generic type is String, yet an error is reported. The code is in 
> test.scala and the error is in this_error.txt.
> However, when I create an entity class myself and use it as the generic type, 
> or use the Row type, everything works normally. Do you have to create your 
> own entity class instead of using the String type?
>  
>  
>  
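> 
> For reference, a minimal sketch of the workaround described above, going 
> through the Row type instead of String (assumes the same `tableEnvironment` 
> and `res` as in the snippet, plus the `org.apache.flink.api.scala._` import 
> already present there; not verified against a particular Flink version):
> {code:scala}
> import org.apache.flink.types.Row
>
> // toAppendStream accepts Row; map the single column back to String.
> tableEnvironment.toAppendStream[Row](res)
>   .map(_.getField(0).asInstanceOf[String])
>   .print()
> {code}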



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22968) String type cannot be used when creating Flink SQL

2021-06-10 Thread DaChun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DaChun updated FLINK-22968:
---
Environment: {color:#FF}*Flink-1.13.1 and Flink-1.12.1*{color}  (was: 
D:\anzhuang\jdk_1.8_181\bin\java.exe "-javaagent:D:\anzhuang\idea_web\IntelliJ 
IDEA 2019.3.1\lib\idea_rt.jar=51865:D:\anzhuang\idea_web\IntelliJ IDEA 
2019.3.1\bin" -Dfile.encoding=UTF-8 -classpath 

[jira] [Created] (FLINK-22968) String type cannot be used when creating Flink SQL

2021-06-10 Thread DaChun (Jira)
DaChun created FLINK-22968:
--

 Summary:  String type cannot be used when creating Flink SQL
 Key: FLINK-22968
 URL: https://issues.apache.org/jira/browse/FLINK-22968
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.13.1, 1.12.3
 Environment: D:\anzhuang\jdk_1.8_181\bin\java.exe 
"-javaagent:D:\anzhuang\idea_web\IntelliJ IDEA 
2019.3.1\lib\idea_rt.jar=51865:D:\anzhuang\idea_web\IntelliJ IDEA 2019.3.1\bin" 
-Dfile.encoding=UTF-8 -classpath 

[jira] [Closed] (FLINK-19550) StreamingFileSink can't be interrupted when writing to S3

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot closed FLINK-19550.
--
Resolution: Auto Closed

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates since, so I have gone ahead and closed it. If you are still affected by 
this or would like to raise the priority of this ticket, please re-open it, 
remove the label "auto-closed", and raise the ticket priority accordingly.


> StreamingFileSink can't be interrupted when writing to S3
> -
>
> Key: FLINK-19550
> URL: https://issues.apache.org/jira/browse/FLINK-19550
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.12.0
>Reporter: Roman Khachatryan
>Priority: Minor
>  Labels: auto-closed
>
> Reported in 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Network-issue-leading-to-quot-No-pooled-slot-available-quot-td38553.html
>  :
> StreamingFileSink uses Hadoop's S3AFileSystem for s3a, which in turn uses the 
> Amazon AWS SDK.
> The bundled version of the Amazon AWS SDK doesn't respect interrupts; however, 
> the newer SDK version 1.11.878 does.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22593) SavepointITCase.testShouldAddEntropyToSavepointPath unstable

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22593:
---
Labels: stale-blocker stale-critical test-stability  (was: stale-critical 
test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as a 
Blocker but is unassigned, and neither it nor its Sub-Tasks have been 
updated for 1 day. I have gone ahead and marked it "stale-blocker". If this 
ticket is a Blocker, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> SavepointITCase.testShouldAddEntropyToSavepointPath unstable
> 
>
> Key: FLINK-22593
> URL: https://issues.apache.org/jira/browse/FLINK-22593
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.14.0
>Reporter: Robert Metzger
>Priority: Blocker
>  Labels: stale-blocker, stale-critical, test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=9072=logs=cc649950-03e9-5fae-8326-2f1ad744b536=51cab6ca-669f-5dc0-221d-1e4f7dc4fc85
> {code}
> 2021-05-07T10:56:20.9429367Z May 07 10:56:20 [ERROR] Tests run: 13, Failures: 
> 0, Errors: 1, Skipped: 0, Time elapsed: 33.441 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.SavepointITCase
> 2021-05-07T10:56:20.9445862Z May 07 10:56:20 [ERROR] 
> testShouldAddEntropyToSavepointPath(org.apache.flink.test.checkpointing.SavepointITCase)
>   Time elapsed: 2.083 s  <<< ERROR!
> 2021-05-07T10:56:20.9447106Z May 07 10:56:20 
> java.util.concurrent.ExecutionException: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint 
> triggering task Sink: Unnamed (3/4) of job 4e155a20f0a7895043661a6446caf1cb 
> has not being executed at the moment. Aborting checkpoint. Failure reason: 
> Not all required tasks are currently running.
> 2021-05-07T10:56:20.9448194Z May 07 10:56:20  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 2021-05-07T10:56:20.9448797Z May 07 10:56:20  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2021-05-07T10:56:20.9449428Z May 07 10:56:20  at 
> org.apache.flink.test.checkpointing.SavepointITCase.submitJobAndTakeSavepoint(SavepointITCase.java:305)
> 2021-05-07T10:56:20.9450160Z May 07 10:56:20  at 
> org.apache.flink.test.checkpointing.SavepointITCase.testShouldAddEntropyToSavepointPath(SavepointITCase.java:273)
> 2021-05-07T10:56:20.9450785Z May 07 10:56:20  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-05-07T10:56:20.9451331Z May 07 10:56:20  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-05-07T10:56:20.9451940Z May 07 10:56:20  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-05-07T10:56:20.9452498Z May 07 10:56:20  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-05-07T10:56:20.9453247Z May 07 10:56:20  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2021-05-07T10:56:20.9454007Z May 07 10:56:20  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-05-07T10:56:20.9454687Z May 07 10:56:20  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2021-05-07T10:56:20.9455302Z May 07 10:56:20  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-05-07T10:56:20.9455909Z May 07 10:56:20  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2021-05-07T10:56:20.9456493Z May 07 10:56:20  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2021-05-07T10:56:20.9457074Z May 07 10:56:20  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2021-05-07T10:56:20.9457636Z May 07 10:56:20  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2021-05-07T10:56:20.9458157Z May 07 10:56:20  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2021-05-07T10:56:20.9458678Z May 07 10:56:20  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2021-05-07T10:56:20.9459252Z May 07 10:56:20  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2021-05-07T10:56:20.9459865Z May 07 10:56:20  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2021-05-07T10:56:20.9460433Z May 07 10:56:20  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 

[jira] [Updated] (FLINK-22443) can not be execute an extreme long sql under batch mode

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22443:
---

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as a 
Blocker but is unassigned, and neither it nor its Sub-Tasks have been 
updated for 1 day. I have gone ahead and marked it "stale-blocker". If this 
ticket is a Blocker, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> can not be execute an extreme long sql under batch mode
> ---
>
> Key: FLINK-22443
> URL: https://issues.apache.org/jira/browse/FLINK-22443
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.12.2
> Environment: execute command
>  
> {code:java}
> bin/sql-client.sh embedded -d conf/sql-client-batch.yaml 
> {code}
> content of conf/sql-client-batch.yaml
>  
> {code:java}
> catalogs:
> - name: bnpmphive
>   type: hive
>   hive-conf-dir: /home/gum/hive/conf
>   hive-version: 3.1.2
> execution:
>   planner: blink
>   type: batch
>   #type: streaming
>   result-mode: table
>   parallelism: 4
>   max-parallelism: 2000
>   current-catalog: bnpmphive
>   #current-database: snmpprobe 
> #configuration:
> #  table.sql-dialect: hive
> modules:
>   - name: core
>     type: core
>   - name: myhive
>     type: hive
> deployment:
>   # general cluster communication timeout in ms
>   response-timeout: 5000
>   # (optional) address from cluster to gateway
>   gateway-address: ""
>   # (optional) port from cluster to gateway
>   gateway-port: 0
> {code}
>  
>Reporter: macdoor615
>Priority: Blocker
>  Labels: stale-blocker, stale-critical
> Attachments: flink-gum-taskexecutor-8-hb3-prod-hadoop-002.log.4.zip, 
> raw_p_restapi_hcd.csv.zip
>
>
> 1. Execute an extremely long SQL statement under batch mode
>  
> {code:java}
> select
> 'CD' product_name,
> r.code business_platform,
> 5 statisticperiod,
> cast('2021-03-24 00:00:00' as timestamp) coltime,
> cast(r1.indicatorvalue as double) as YWPT_ZHQI_CD_038_GZ_2,
> cast(r2.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_7,
> cast(r3.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_5,
> cast(r4.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_6,
> cast(r5.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00029,
> cast(r6.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00028,
> cast(r7.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00015,
> cast(r8.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00014,
> cast(r9.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00011,
> cast(r10.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00010,
> cast(r11.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00013,
> cast(r12.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00012,
> cast(r13.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00027,
> cast(r14.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00026,
> cast(r15.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00046,
> cast(r16.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00047,
> cast(r17.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00049,
> cast(r18.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00048,
> cast(r19.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00024,
> cast(r20.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00025,
> cast(r21.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00022,
> cast(r22.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00023,
> cast(r23.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00054,
> cast(r24.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00055,
> cast(r25.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00033,
> cast(r26.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00032,
> cast(r27.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00053,
> cast(r28.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00052,
> cast(r29.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00051,
> cast(r30.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00050,
> cast(r31.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00043,
> cast(r32.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00042,
> cast(r33.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00017,
> cast(r34.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00016,
> cast(r35.indicatorvalue as double) as YWPT_ZHQI_CD_038_GZ_3,
> cast(r36.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00045,
> cast(r37.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00044,
> cast(r38.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00038,
> cast(r39.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00039,
> cast(r40.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00037,
> 

[jira] [Updated] (FLINK-10794) Do not create checkpointStorage when checkpoint is disabled

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10794:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Do not create checkpointStorage when checkpoint is disabled
> ---
>
> Key: FLINK-10794
> URL: https://issues.apache.org/jira/browse/FLINK-10794
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Reporter: Congxian Qiu
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> `StreamTask#invoke` creates a CheckpointStorage via 
> `stateBackend.createCheckpointStorage`, which also creates some directories.
> But if checkpointing is disabled, we could skip the creation of the 
> CheckpointStorage.
>  
> IMO, the code could be changed to something like below:
> {code:java}
> boolean enabledCheckpoint = 
> configuration.getConfiguration().getBoolean("checkpointing", false);
> if (enabledCheckpoint) {
>checkpointStorage = 
> stateBackend.createCheckpointStorage(getEnvironment().getJobID());
> }
> {code}
>  
> or just skip the directory creation when creating the CheckpointStorage.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-2491) Support Checkpoints After Tasks Finished

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-2491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-2491:
--
  Labels: auto-deprioritized-critical  (was: stale-critical)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any 
updates since, so it is being deprioritized. If this ticket is actually 
Critical, please raise the priority and ask a committer to assign you the issue 
or revive the public discussion.


> Support Checkpoints After Tasks Finished
> 
>
> Key: FLINK-2491
> URL: https://issues.apache.org/jira/browse/FLINK-2491
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Checkpointing
>Affects Versions: 0.10.0
>Reporter: Robert Metzger
>Assignee: Yun Gao
>Priority: Major
>  Labels: auto-deprioritized-critical
> Fix For: 1.14.0
>
> Attachments: fix_checkpoint_not_working_if_tasks_are_finished.patch
>
>
> While implementing a test case for the Kafka Consumer, I came across the 
> following bug:
> Consider the following topology, with the operator parallelism in parentheses:
> Source (2) --> Sink (1).
> In this setup, the {{snapshotState()}} method is called on the source, but 
> not on the sink.
> The sink receives the generated data; however, 
> only one of the two sources is generating data.
> I've implemented a test case for this, you can find it here: 
> https://github.com/rmetzger/flink/blob/para_checkpoint_bug/flink-tests/src/test/java/org/apache/flink/test/checkpointing/ParallelismChangeCheckpoinedITCase.java



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10806) Support multiple consuming offsets when discovering a new topic

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10806:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Support multiple consuming offsets when discovering a new topic
> ---
>
> Key: FLINK-10806
> URL: https://issues.apache.org/jira/browse/FLINK-10806
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.6.2, 1.8.1
>Reporter: Jiayi Liao
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> In KafkaConsumerBase, we discover the TopicPartitions and compare them with 
> the restoredState. This is reasonable when a topic's partitions have been 
> rescaled. However, if we add a new topic which has a lot of data and restore 
> the Flink program, the data of the new topic will be consumed from the start, 
> which may not be what we want. I think this should be an option for 
> developers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10834) TableAPI flatten() calculated value error

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10834:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> TableAPI flatten() calculated value error
> -
>
> Key: FLINK-10834
> URL: https://issues.apache.org/jira/browse/FLINK-10834
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: sunjincheng
>Priority: Major
>  Labels: auto-unassigned, stale-major
> Fix For: 1.7.3
>
>
> We have a UDF as follows:
> {code:scala}
> import scala.util.Random
>
> import org.apache.flink.api.common.typeinfo.TypeInformation
> import org.apache.flink.table.api.Types
> import org.apache.flink.table.functions.ScalarFunction
> import org.apache.flink.types.Row
>
> object FuncRow extends ScalarFunction {
>   def eval(v: Int): Row = {
>     val version = "" + new Random().nextInt()
>     val row = new Row(3)
>     row.setField(0, version)
>     row.setField(1, version)
>     row.setField(2, version)
>     row
>   }
>
>   override def isDeterministic: Boolean = false
>
>   override def getResultType(signature: Array[Class[_]]): TypeInformation[_] =
>     Types.ROW(Types.STRING, Types.STRING, Types.STRING)
> }
> {code}
> Run the following query:
> {code:scala}
> val data = new mutable.MutableList[(Int, Long, String)]
> data.+=((1, 1L, "Hi"))
> val ds = env.fromCollection(data).toTable(tEnv, 'a, 'b, 'c)
>   .select(FuncRow('a).flatten()).as('v1, 'v2, 'v3)
> {code}
> The result is: -1189206469,-151367792,1988676906
> The result expected by the user is v1 == v2 == v3.
> It looks like the real reason is that the function result is not reused in the 
> generated code.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10855) CheckpointCoordinator does not delete checkpoint directory of late/failed checkpoints

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10855:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> CheckpointCoordinator does not delete checkpoint directory of late/failed 
> checkpoints
> -
>
> Key: FLINK-10855
> URL: https://issues.apache.org/jira/browse/FLINK-10855
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.5.5, 1.6.2, 1.7.0
>Reporter: Till Rohrmann
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> In case an acknowledge checkpoint message is late or a checkpoint cannot 
> be acknowledged, we discard the subtask state in the 
> {{CheckpointCoordinator}}. What does not happen in this case is that we 
> delete the parent directory of the checkpoint. This only happens in 
> {{PendingCheckpoint#dispose}}. 
> Due to this behaviour it can happen that a checkpoint fails (e.g. a task not 
> being ready) and we delete the checkpoint directory. Next, another task writes 
> its checkpoint data to the checkpoint directory (thereby creating it again) 
> and sends an acknowledge message back to the {{CheckpointCoordinator}}. The 
> {{CheckpointCoordinator}} will realize that there is no longer a 
> {{PendingCheckpoint}} and will discard the subtask state. This will remove 
> the state files from the checkpoint directory but will leave the checkpoint 
> directory untouched.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19380) Add support for a gRPC transport for the RequestReply protocol.

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-19380:
---
Labels: stale-major  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Add support for a gRPC transport for the RequestReply protocol.
> ---
>
> Key: FLINK-19380
> URL: https://issues.apache.org/jira/browse/FLINK-19380
> Project: Flink
>  Issue Type: New Feature
>  Components: Stateful Functions
>Reporter: Igal Shilman
>Priority: Major
>  Labels: stale-major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10882) Misleading job/task state for scheduled jobs

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10882:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Misleading job/task state for scheduled jobs
> 
>
> Key: FLINK-10882
> URL: https://issues.apache.org/jira/browse/FLINK-10882
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.7.0
>Reporter: Chesnay Schepler
>Priority: Major
>  Labels: auto-unassigned, stale-major
> Attachments: list_view.png, task_view.png
>
>
> Submitting a job when not enough resources are available currently 
> causes the job to stay in a {{CREATED/SCHEDULED}} state.
> There are 2 issues with how this is presented in the UI.
> The {{Running Jobs}} page incorrectly states that the job is running.
> (see list_view attachment)
> EDIT: Actually, from a runtime perspective the job is in fact in a RUNNING 
> state.
> The state display for individual tasks either
> # States the task is in a CREATED state, when it is actually SCHEDULED
> # States the task is in a CREATED state, but the count for all state boxes is 
> zero.
> (see task_view attachment)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20574) Throttle number of remote invocation requests on startup or restores with large backlogs

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-20574:
---
Labels: auto-unassigned pull-request-available stale-major  (was: 
auto-unassigned pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Throttle number of remote invocation requests on startup or restores with 
> large backlogs
> 
>
> Key: FLINK-20574
> URL: https://issues.apache.org/jira/browse/FLINK-20574
> Project: Flink
>  Issue Type: Improvement
>  Components: Stateful Functions
>Reporter: Tzu-Li (Gordon) Tai
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-major
>
> On startup or restores, the {{RequestReplyFunction}} may heavily load the 
> remote functions with multiple concurrent invocation requests if there is a 
> large backlog of restored or historical events to process through.
> The new protocol introduced by FLINK-20265 amplifies this much more due to 
> the extra invocation roundtrips if the function has state 
> declarations (i.e., the first horde of concurrent invocations would all fail 
> with an {{IncompleteInvocationContext}} and require invocation patching + 
> state registrations).
> We should think about how to apply throttling to mitigate these scenarios.
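> 
> A sketch of one way such throttling could look: bound the number of in-flight 
> requests with a semaphore (illustrative only, not the actual 
> {{RequestReplyFunction}} implementation; all names are made up):
> {code:scala}
> import java.util.concurrent.Semaphore
>
> // Callers block once maxInFlight invocation requests are pending.
> class InvocationThrottler(maxInFlight: Int) {
>   private val permits = new Semaphore(maxInFlight)
>
>   def invoke[T](request: () => T): T = {
>     permits.acquire()
>     try request()
>     finally permits.release()
>   }
> }
> {code}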



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21308) Cancel "sendAfter"

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21308:
---
Labels: auto-deprioritized-major stale-major  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Cancel "sendAfter"
> --
>
> Key: FLINK-21308
> URL: https://issues.apache.org/jira/browse/FLINK-21308
> Project: Flink
>  Issue Type: New Feature
>  Components: Stateful Functions
>Reporter: Stephan Pelikan
>Priority: Major
>  Labels: auto-deprioritized-major, stale-major
> Fix For: statefun-3.1.0
>
>
> As a user I want to cancel delayed messages that are no longer needed, to keep 
> state clean.
> Use case:
> {quote}My use-case is processing business events of customers. Those events 
> are triggered by ourselves or by the customer, depending on the current 
> state of the ongoing customer's business use-case. We need to monitor 
> delayed/missing business events which belong to previous events. For example: 
> the customer has to confirm something we did. Depending on what it is, the 
> confirmation has to come within hours, days or even months. If there is a delay 
> we need to know. But if the customer confirms in time we want to clean up to 
> keep the state small.
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-11070) Add stream-stream non-window cross join

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-11070:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Add stream-stream non-window cross join
> ---
>
> Key: FLINK-11070
> URL: https://issues.apache.org/jira/browse/FLINK-11070
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Hequn Cheng
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> Currently, we don't reorder joins and rely on the order provided by the user. 
> This is fine for most cases, however, it limits the set of supported 
> SQL queries.
> Example:
> {code:java}
> val streamUtil: StreamTableTestUtil = streamTestUtil()
> streamUtil.addTable[(Int, String, Long)]("MyTable", 'a, 'b, 'c.rowtime, 
> 'proctime.proctime)
> streamUtil.addTable[(Int, String, Long)]("MyTable2", 'a, 'b, 'c.rowtime, 
> 'proctime.proctime)
> streamUtil.addTable[(Int, String, Long)]("MyTable3", 'a, 'b, 'c.rowtime, 
> 'proctime.proctime)
> val sqlQuery =
>   """
> |SELECT t1.a, t3.b
> |FROM MyTable3 t3, MyTable2 t2, MyTable t1
> |WHERE t1.a = t3.a AND t1.a = t2.a
> |""".stripMargin
> streamUtil.printSql(sqlQuery)
> {code}
> Given the current rule sets, this query produces a cross join which is not 
> supported and thus leads to:
> {code:java}
> org.apache.flink.table.api.TableException: Cannot generate a valid execution 
> plan for the given query: 
> LogicalProject(a=[$8], b=[$1])
>   LogicalFilter(condition=[AND(=($8, $0), =($8, $4))])
> LogicalJoin(condition=[true], joinType=[inner])
>   LogicalJoin(condition=[true], joinType=[inner])
> LogicalTableScan(table=[[_DataStreamTable_2]])
> LogicalTableScan(table=[[_DataStreamTable_1]])
>   LogicalTableScan(table=[[_DataStreamTable_0]])
> This exception indicates that the query uses an unsupported SQL feature.
> Please check the documentation for the set of currently supported SQL 
> features.
> {code}
> In order to support more queries, it would be nice to have cross joins on 
> streaming. We can start from a simple version, for example, calling 
> forceNonParallel() on the connect operator in `DataStreamJoin` when it is a 
> cross join. The performance may be bad, but it works fine if the two tables 
> of the cross join are small. 
> We can do some optimizations later, such as broadcasting the smaller side, 
> etc.
> Any suggestions are greatly appreciated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-11141) Key generation for RocksDBMapState can theoretically be ambiguous

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-11141:
---
Labels: auto-deprioritized-critical stale-major  (was: 
auto-deprioritized-critical)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Key generation for RocksDBMapState can theoretically be ambiguous
> -
>
> Key: FLINK-11141
> URL: https://issues.apache.org/jira/browse/FLINK-11141
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.5.5, 1.6.2, 1.7.0
>Reporter: Stefan Richter
>Priority: Major
>  Labels: auto-deprioritized-critical, stale-major
>
> RocksDBMap state stores values in RocksDB under a composite key from the 
> serialized bytes of {{key-group-id|key|namespace|user-key}}. In this 
> composition, key, namespace, and user-key can either have fixed sized or 
> variable sized serialization formats. In cases of at least 2 variable 
> formats, ambiguity can be possible, e.g.:
> abcd <-> efg
> abc <-> defg
> Our code takes care of this for all other states, where composite keys only 
> consist of key and namespace by checking for 2x variable size and appending 
> the serialized length to each byte sequence.
> However, for map state there is no inclusion of the user-key in the check for 
> potential ambiguity, as well as for appending the size. This means that, in 
> theory, some combinations can produce colliding composite keys in RocksDB. 
> What is required is to include the user-key serializer in the check and 
> append the length there as well.
> Please notice that this cannot be simply changed because it has implications 
> for backwards compatibility and requires some form of migration for the state 
> keys on restore.
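> 
> For illustration, a sketch of the length-appending scheme described above 
> (this is not the real state backend serialization code; the names and the 
> single-byte length encoding are simplifications):
> {code:scala}
> import java.io.ByteArrayOutputStream
> import java.nio.charset.StandardCharsets.UTF_8
>
> def compositeKey(key: String, namespace: String, userKey: String): Array[Byte] = {
>   val out = new ByteArrayOutputStream()
>   for (part <- Seq(key, namespace, userKey)) {
>     val bytes = part.getBytes(UTF_8)
>     out.write(bytes)
>     out.write(bytes.length) // appended length breaks ambiguity (parts < 256 bytes)
>   }
>   out.toByteArray
> }
>
> // Without the appended lengths, ("abcd", "efg", u) and ("abc", "defg", u)
> // would produce the same byte sequence; with them, they differ.
> {code}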



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10873) Remove tableEnv in DataSetConversions#toTable and DataStreamConversions#toTable

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10873:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Remove tableEnv in DataSetConversions#toTable and 
> DataStreamConversions#toTable
> ---
>
> Key: FLINK-10873
> URL: https://issues.apache.org/jira/browse/FLINK-10873
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Jeff Zhang
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> What I would like to achieve is to change the following code
> {code}
> val table = data.flatMap(line=>line.split("\\s"))
>   .map(w => (w, 1))
>   .toTable(tEnv, 'word, 'count)
> {code}
> to this
> {code}
> val table = data.flatMap(line=>line.split("\\s"))
>   .map(w => (w, 1))
>   .toTable('word, 'count)
> {code}
> The only change is that tableEnv is removed from the toTable method. I think 
> the second piece of code is more readable. We can create the TableEnvironment 
> based on the ExecutionEnvironment of the DataSet/DataStream rather than asking 
> the user to pass it explicitly. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-11060) Unable to set number of task manager and slot per task manager in scala shell local mode

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-11060:
---
Labels: auto-unassigned pull-request-available stale-major  (was: 
auto-unassigned pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Unable to set number of task manager and slot per task manager in scala shell 
> local mode
> 
>
> Key: FLINK-11060
> URL: https://issues.apache.org/jira/browse/FLINK-11060
> Project: Flink
>  Issue Type: Bug
>  Components: Scala Shell
>Affects Versions: 1.7.0
>Reporter: Jeff Zhang
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-major
>
> In the Scala shell, I cannot change the number of task managers or the slots 
> per task manager; they are hard-coded to 1 and cannot be specified in 
> flink-conf.yaml.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22626) KafkaITCase.testTimestamps fails on Azure

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22626:
---
Labels: stale-major test-stability  (was: test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> KafkaITCase.testTimestamps fails on Azure
> -
>
> Key: FLINK-22626
> URL: https://issues.apache.org/jira/browse/FLINK-22626
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.12.3
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: stale-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17819=logs=72d4811f-9f0d-5fd0-014a-0bc26b72b642=c1d93a6a-ba91-515d-3196-2ee8019fbda7=6708
> {code}
> Caused by: org.apache.kafka.common.protocol.types.SchemaException: Error 
> reading field 'api_keys': Error reading array of size 131096, only 50 bytes 
> available
>   at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:110)
>   at 
> org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:324)
>   at 
> org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:162)
>   at 
> org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:719)
>   at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:833)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:556)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:212)
>   at 
> org.apache.kafka.clients.consumer.internals.Fetcher.getTopicMetadata(Fetcher.java:368)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1926)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1894)
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.KafkaPartitionDiscoverer.getAllPartitionsForTopics(KafkaPartitionDiscoverer.java:75)
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer.discoverPartitions(AbstractPartitionDiscoverer.java:133)
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.open(FlinkKafkaConsumerBase.java:577)
>   at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:428)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$2(StreamTask.java:545)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:535)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:575)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-11102) Enable check for previous data in SpanningRecordSerializer

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-11102:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Enable check for previous data in SpanningRecordSerializer
> --
>
> Key: FLINK-11102
> URL: https://issues.apache.org/jira/browse/FLINK-11102
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.8.0
>Reporter: Nico Kruber
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> {{SpanningRecordSerializer}} only verifies that there is no left-over data 
> from a previous serialization call if {{SpanningRecordSerializer#CHECKED}} is 
> {{true}} but this is hard-coded as {{false}}. Now if there was previous data, 
> we would silently drop this and continue with our data. The deserializer 
> would probably notice and fail but identifying the root cause may not be as 
> easy anymore.
> -> We should enable that check by default since the only thing it does is to 
> verify {{!java.nio.Buffer#hasRemaining()}} which cannot be too expensive.
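> 
> A minimal sketch of the kind of check being proposed (names are illustrative, 
> not the actual {{SpanningRecordSerializer}} fields):
> {code:scala}
> import java.nio.ByteBuffer
>
> def serializeRecord(buffer: ByteBuffer, record: Array[Byte]): Unit = {
>   // Fail fast instead of silently dropping left-over bytes from a
>   // previous serialization call.
>   require(!buffer.hasRemaining, "Pending serialization of previous record.")
>   buffer.clear()
>   buffer.put(record)
>   buffer.flip()
> }
> {code}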



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22441) In Flink v1.11.3 contains netty(version:3.10.6) netty(version:4.1.60) . There are many vulnerabilities, like CVE-2021-21409 etc. please confirm these version and fix. th

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22441:
---
Labels: stale-major  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> In Flink v1.11.3 contains netty(version:3.10.6) netty(version:4.1.60) . There 
> are many vulnerabilities, like CVE-2021-21409 etc. please confirm these 
> version and fix. thx
> --
>
> Key: FLINK-22441
> URL: https://issues.apache.org/jira/browse/FLINK-22441
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.3, 1.12.2, 1.13.0
>Reporter: 张健
>Priority: Major
>  Labels: stale-major
>
> Flink v1.11.3 contains netty (version 3.10.6) and netty (version 4.1.60). There 
> are many vulnerabilities, like CVE-2021-21409, CVE-2021-21295, etc. Please 
> confirm these versions and fix. Thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22532) Improve the support for remote functions in the DataStream integration

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22532:
---
Labels: stale-major  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If 
this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Improve the support for remote functions in the DataStream integration 
> ---
>
> Key: FLINK-22532
> URL: https://issues.apache.org/jira/browse/FLINK-22532
> Project: Flink
>  Issue Type: Improvement
>  Components: Stateful Functions
>Affects Versions: statefun-3.0.0
>Reporter: Igal Shilman
>Priority: Major
>  Labels: stale-major
> Fix For: statefun-3.1.0
>
>
> While looking at 
> [RoutableMessageBuilder.java|https://github.com/apache/flink-statefun/blob/master/statefun-flink/statefun-flink-core/src/main/java/org/apache/flink/statefun/flink/core/message/RoutableMessageBuilder.java#L57]
>  it is not that clear that the argument for a remote function needs to be of 
> type TypedValue. 
> We need to think about how to improve the end experience for this use case. 
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16627) Support only generate non-null values when serializing into JSON

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16627:
---
  Labels: auto-deprioritized-major auto-unassigned sprint  (was: 
auto-unassigned sprint stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates since, so it is being deprioritized. If this ticket is actually Major, 
please raise the priority and ask a committer to assign you the issue or revive 
the public discussion.


> Support only generate non-null values when serializing into JSON
> 
>
> Key: FLINK-16627
> URL: https://issues.apache.org/jira/browse/FLINK-16627
> Project: Flink
>  Issue Type: New Feature
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Planner
>Affects Versions: 1.10.0
>Reporter: jackray wang
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, sprint
>
> {code:sql}
> CREATE TABLE sink_kafka ( subtype STRING , svt STRING ) WITH (……)
> {code}
>  
> {code:sql}
> CREATE TABLE source_kafka ( subtype STRING , svt STRING ) WITH (……)
> {code}
>  
> {code:scala}
> import org.apache.flink.table.functions.ScalarFunction
>
> class ScalaUpper extends ScalarFunction {
>   def eval(str: String): String =
>     if (str == null) "" else str
> }
>
> btenv.registerFunction("scala_upper", new ScalaUpper())
> {code}
>  
> {code:sql}
> insert into sink_kafka select subtype, scala_upper(svt) from source_kafka
> {code}
>  
>  
> 
> Sometimes svt's value is null, and the insert into Kafka produces JSON like 
> \{"subtype":"qin","svt":null}.
> If the amount of data is small, this is acceptable, but we process 10TB of data 
> every day, and there may be many nulls in the JSON, which affects 
> efficiency. If you could add a parameter to remove null keys when defining a 
> sink table, the performance would be greatly improved.
>  
>  
>  
>  
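> 
> A minimal sketch of the requested behaviour, dropping null fields before 
> emitting JSON (hand-rolled for illustration only; this is not an existing 
> Flink format option):
> {code:scala}
> def toJson(fields: Map[String, String]): String =
>   fields.collect { case (k, v) if v != null => s"\"$k\":\"$v\"" }
>     .mkString("{", ",", "}")
>
> println(toJson(Map("subtype" -> "qin", "svt" -> null)))
> // prints {"subtype":"qin"} instead of {"subtype":"qin","svt":null}
> {code}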



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16609) Promotes the column name representation of SQL-CLI

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16609:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates since, so it is being deprioritized. If this ticket is actually Major, 
please raise the priority and ask a committer to assign you the issue or revive 
the public discussion.


> Promotes the column name representation of SQL-CLI
> --
>
> Key: FLINK-16609
> URL: https://issues.apache.org/jira/browse/FLINK-16609
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: Danny Chen
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
> Fix For: 1.14.0
>
>
> The SQL-CLI now outputs the column name as the name that comes from the plan, 
> which is not that readable sometimes. I can think of 2 cases that can be 
> improved:
> * Expressions like "a + b" should output "a + b" (as in the MySQL CLI) instead 
> of "$Expr{index}"
> * We should always output the alias if it is there; currently, the alias in 
> the plan may be dropped for 2 reasons:
> 1. The project remove rule would remove the project without considering the 
> alias
> 2. After CALCITE-3713, some CALC/PROJ would be reused while ignoring the 
> column alias



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16688) Hive-connector should set SessionState for hive

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16688:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates since, so it is being deprioritized. If this ticket is actually Major, 
please raise the priority and ask a committer to assign you the issue or revive 
the public discussion.


> Hive-connector should set SessionState for hive
> ---
>
> Key: FLINK-16688
> URL: https://issues.apache.org/jira/browse/FLINK-16688
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
>Reporter: Jingsong Lee
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
> Without a SessionState, UDFs like GenericUDFUnixTimeStamp cannot be used.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20217) More fine-grained timer processing

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-20217:
---
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates since, so it is being deprioritized. If this ticket is actually Major, 
please raise the priority and ask a committer to assign you the issue or revive 
the public discussion.


> More fine-grained timer processing
> --
>
> Key: FLINK-20217
> URL: https://issues.apache.org/jira/browse/FLINK-20217
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.10.2, 1.11.2, 1.12.0
>Reporter: Nico Kruber
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> Timers are currently processed in one big block under the checkpoint lock 
> (in {{InternalTimerServiceImpl#advanceWatermark}}). This can be problematic 
> in a number of checkpointing scenarios and can lead to 
> checkpoints timing out (even unaligned checkpoints would not help).
> If you have a huge number of timers to process when advancing the watermark 
> and the task is also back-pressured, the situation may actually be worse 
> since you would block on the checkpoint lock and also wait for 
> buffers/credits from the receiver.
> I propose to make this loop more fine-grained so that it is interruptible by 
> checkpoints, but maybe there is also some other way to improve here.
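> 
> A rough sketch of the proposed fine-grained processing (illustrative only, 
> not the actual {{InternalTimerServiceImpl}} code): fire timers in small 
> chunks and release the lock between chunks so a checkpoint can acquire it.
> {code:scala}
> import scala.collection.mutable
>
> // timers holds timer timestamps in ascending order; chunkSize is arbitrary.
> def advanceWatermark(timers: mutable.TreeSet[Long],
>                      watermark: Long,
>                      lock: AnyRef,
>                      fire: Long => Unit,
>                      chunkSize: Int = 100): Unit = {
>   var more = true
>   while (more) {
>     lock.synchronized {
>       var fired = 0
>       while (fired < chunkSize && timers.nonEmpty && timers.head <= watermark) {
>         val t = timers.head
>         timers.remove(t)
>         fire(t)
>         fired += 1
>       }
>       more = timers.nonEmpty && timers.head <= watermark
>     }
>     // The lock is released here between chunks, so a pending checkpoint is
>     // no longer blocked for the duration of the whole timer backlog.
>   }
> }
> {code}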



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16765) Replace all BatchTableEnvironment to StreamTableEnvironment in the document of PyFlink

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16765:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates since, so it is being deprioritized. If this ticket is actually Major, 
please raise the priority and ask a committer to assign you the issue or revive 
the public discussion.


> Replace all BatchTableEnvironment to StreamTableEnvironment in the document 
> of PyFlink
> --
>
> Key: FLINK-16765
> URL: https://issues.apache.org/jira/browse/FLINK-16765
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Documentation
>Reporter: Hequn Cheng
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
> For example, in the 
> [tutorial|https://ci.apache.org/projects/flink/flink-docs-master/getting-started/walkthroughs/python_table_api.html],
>  replace the BatchTableEnvironment to StreamTableEnvironment.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16708) When a JDBC connection has been closed, the retry policy of the JDBCUpsertOutputFormat cannot take effect

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16708:
---
  Labels: auto-deprioritized-major auto-unassigned pull-request-available  
(was: auto-unassigned pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates since, so it is being deprioritized. If this ticket is actually Major, 
please raise the priority and ask a committer to assign you the issue or revive 
the public discussion.


> When a JDBC connection has been closed, the retry policy of the 
> JDBCUpsertOutputFormat cannot take effect 
> --
>
> Key: FLINK-16708
> URL: https://issues.apache.org/jira/browse/FLINK-16708
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: Shangwen Tang
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In our test environment, I used the tcpkill command to simulate a scenario 
> where the PostgreSQL connection was closed. I found that the retry strategy 
> of the flush method did not take effect:
> {code:java}
> 2020-03-20 21:16:18.246 [jdbc-upsert-output-format-thread-1] ERROR 
> org.apache.flink.api.java.io.jdbc.JDBCUpsertOutputFormat  - JDBC executeBatch 
> error, retry times = 1
>  org.postgresql.util.PSQLException: This connection has been closed.
>  at 
> org.postgresql.jdbc.PgConnection.checkClosed(PgConnection.java:857)
>  at 
> org.postgresql.jdbc.PgConnection.getAutoCommit(PgConnection.java:817)
>  at 
> org.postgresql.jdbc.PgStatement.internalExecuteBatch(PgStatement.java:813)
>  at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:873)
>  at 
> org.postgresql.jdbc.PgPreparedStatement.executeBatch(PgPreparedStatement.java:1569)
>  at 
> org.apache.flink.api.java.io.jdbc.writer.AppendOnlyWriter.executeBatch(AppendOnlyWriter.java:62)
>  at 
> org.apache.flink.api.java.io.jdbc.JDBCUpsertOutputFormat.flush(JDBCUpsertOutputFormat.java:159)
>  at 
> org.apache.flink.api.java.io.jdbc.JDBCUpsertOutputFormat.lambda$open$0(JDBCUpsertOutputFormat.java:124)
>  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  2020-03-20 21:16:21.247 [jdbc-upsert-output-format-thread-1] ERROR 
> org.apache.flink.api.java.io.jdbc.JDBCUpsertOutputFormat  - JDBC executeBatch 
> error, retry times = 1
>  org.postgresql.util.PSQLException: This connection has been closed.
>  at 
> org.postgresql.jdbc.PgConnection.checkClosed(PgConnection.java:857)
>  at 
> org.postgresql.jdbc.PgConnection.getAutoCommit(PgConnection.java:817)
>  at 
> org.postgresql.jdbc.PgStatement.internalExecuteBatch(PgStatement.java:813)
>  at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:873)
>  at 
> org.postgresql.jdbc.PgPreparedStatement.executeBatch(PgPreparedStatement.java:1569)
>  at 
> org.apache.flink.api.java.io.jdbc.writer.AppendOnlyWriter.executeBatch(AppendOnlyWriter.java:62)
>  at 
> org.apache.flink.api.java.io.jdbc.JDBCUpsertOutputFormat.flush(JDBCUpsertOutputFormat.java:159)
>  at 
> org.apache.flink.api.java.io.jdbc.JDBCUpsertOutputFormat.lambda$open$0(JDBCUpsertOutputFormat.java:124)
>  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> {code}
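For illustration, here is a minimal sketch of a flush retry that re-establishes
the JDBC connection before the next attempt, which is what the log above shows
is missing. All names and the SQL are hypothetical; this is not the actual
JDBCUpsertOutputFormat code:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ReconnectingFlush {

    private Connection connection;
    private PreparedStatement statement;
    private final String url = "jdbc:postgresql://localhost:5432/db"; // assumed

    public void flushWithRetry(int maxRetries) throws SQLException {
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            try {
                statement.executeBatch();
                return;
            } catch (SQLException e) {
                if (attempt == maxRetries) {
                    throw e;
                }
                // If the connection itself is dead, retrying the same statement
                // can never succeed -- reconnect before the next attempt.
                if (connection == null || !connection.isValid(5)) {
                    reconnect();
                }
            }
        }
    }

    private void reconnect() throws SQLException {
        if (connection != null) {
            try {
                connection.close();
            } catch (SQLException ignored) {
                // the old connection is already broken
            }
        }
        connection = DriverManager.getConnection(url);
        statement = connection.prepareStatement("INSERT INTO t VALUES (?)"); // assumed
        // NOTE: any un-flushed batch must be re-added to the new statement here.
    }
}
{code}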



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

[jira] [Updated] (FLINK-22465) KafkaSourceITCase.testValueOnlyDeserializer hangs

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22465:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> KafkaSourceITCase.testValueOnlyDeserializer hangs
> -
>
> Key: FLINK-22465
> URL: https://issues.apache.org/jira/browse/FLINK-22465
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
>Reporter: Guowei Ma
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17104=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=28977
> {code:java}
>   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
>   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
>   at java.util.Iterator.forEachRemaining(Iterator.java:115)
>   at 
> org.apache.flink.connector.kafka.source.KafkaSourceITCase.testValueOnlyDeserializer(KafkaSourceITCase.java:155)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.jav
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22068) FlinkKinesisConsumerTest.testPeriodicWatermark fails on azure

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22068:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> FlinkKinesisConsumerTest.testPeriodicWatermark fails on azure
> -
>
> Key: FLINK-22068
> URL: https://issues.apache.org/jira/browse/FLINK-22068
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.13.0
>Reporter: Dawid Wysakowicz
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
> Fix For: 1.14.0
>
>
> {code}
> [ERROR] Tests run: 11, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 5.567 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumerTest
> [ERROR] 
> testPeriodicWatermark(org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumerTest)
>   Time elapsed: 0.845 s  <<< FAILURE!
> java.lang.AssertionError: 
> Expected: iterable containing [, ]
>  but: item 0: was 
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.junit.Assert.assertThat(Assert.java:956)
>   at org.junit.Assert.assertThat(Assert.java:923)
>   at 
> org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumerTest.testPeriodicWatermark(FlinkKinesisConsumerTest.java:988)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:326)
>   at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:89)
>   at 
> org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:97)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:310)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:131)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.access$100(PowerMockJUnit47RunnerDelegateImpl.java:59)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner$TestExecutorStatement.evaluate(PowerMockJUnit47RunnerDelegateImpl.java:147)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.evaluateStatement(PowerMockJUnit47RunnerDelegateImpl.java:107)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:298)
>   at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:87)
>   at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:50)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:218)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:160)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:134)
>   at 
> org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34)
>   at 
> org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:136)
> 

[jira] [Updated] (FLINK-16735) FlinkKafkaProducer should check that it is not null before sending a record

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16735:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> FlinkKafkaProducer should check that it is not null before sending a record
> ---
>
> Key: FLINK-16735
> URL: https://issues.apache.org/jira/browse/FLINK-16735
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
>Reporter: Shangwen Tang
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
> Attachments: image-2020-03-24-11-40-22-143.png
>
>
> In our user scenario, some users implemented the KafkaSerializationSchema and 
> sometimes returned a null record, resulting in a null pointer exception
> !image-2020-03-24-11-40-22-143.png!
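A minimal sketch of the kind of guard being asked for: fail fast with a
descriptive message when the user-provided serializer returns null, instead of
hitting an opaque NullPointerException deeper in the producer. Illustrative
only; this is not the actual FlinkKafkaProducer code:

{code:java}
import org.apache.kafka.clients.producer.ProducerRecord;

public class NullCheckingSend {

    public void send(ProducerRecord<byte[], byte[]> record, Object element) {
        if (record == null) {
            throw new IllegalStateException(
                    "KafkaSerializationSchema.serialize(..) returned null for element "
                            + element
                            + "; the schema must return a non-null ProducerRecord.");
        }
        // kafkaProducer.send(record, callback); // actual send elided
    }
}
{code}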



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16762) Relocation Beam dependency of PyFlink

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16762:
---
  Labels: auto-deprioritized-major auto-unassigned pull-request-available  
(was: auto-unassigned pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Relocation Beam dependency of PyFlink
> -
>
> Key: FLINK-16762
> URL: https://issues.apache.org/jira/browse/FLINK-16762
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Affects Versions: 1.10.0
>Reporter: sunjincheng
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Fix For: 1.14.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Some users may already use Beam on their own cluster, which may cause a 
> conflict between the Beam jar bundled with PyFlink and the Beam jar on the 
> user's cluster. So, I would like to relocate the Beam dependency of PyFlink.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22466) KafkaSourceLegacyITCase.testOneToOneSources fail because the OperatorEvent lost

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22466:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> KafkaSourceLegacyITCase.testOneToOneSources fail because the OperatorEvent 
> lost
> ---
>
> Key: FLINK-22466
> URL: https://issues.apache.org/jira/browse/FLINK-22466
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17110=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=7010
> {code:java}
> 2021-04-23T14:31:37.1620668Z Apr 23 14:31:37 [INFO] Running 
> org.apache.flink.connector.kafka.source.KafkaSourceLegacyITCase
> 2021-04-23T14:32:27.0398155Z java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2021-04-23T14:32:27.0400673Z  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 2021-04-23T14:32:27.0401550Z  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2021-04-23T14:32:27.0402365Z  at 
> org.apache.flink.test.util.TestUtils.tryExecute(TestUtils.java:49)
> 2021-04-23T14:32:27.0403227Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runOneToOneExactlyOnceTest(KafkaConsumerTestBase.java:1009)
> 2021-04-23T14:32:27.0403937Z  at 
> org.apache.flink.connector.kafka.source.KafkaSourceLegacyITCase.testOneToOneSources(KafkaSourceLegacyITCase.java:77)
> 2021-04-23T14:32:27.0404881Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-04-23T14:32:27.0405293Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-04-23T14:32:27.0406792Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-04-23T14:32:27.0407333Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-04-23T14:32:27.0407743Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2021-04-23T14:32:27.0408318Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-04-23T14:32:27.0408784Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2021-04-23T14:32:27.0409246Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-04-23T14:32:27.0409742Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> 2021-04-23T14:32:27.0410251Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> 2021-04-23T14:32:27.0410727Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2021-04-23T14:32:27.0411065Z  at java.lang.Thread.run(Thread.java:748)
> 2021-04-23T14:32:27.0411430Z Caused by: 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2021-04-23T14:32:27.0411931Z  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> 2021-04-23T14:32:27.0412631Z  at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137)
> 2021-04-23T14:32:27.0413144Z  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2021-04-23T14:32:27.0413605Z  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2021-04-23T14:32:27.0414063Z  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2021-04-23T14:32:27.0414497Z  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2021-04-23T14:32:27.0415002Z  at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:237)
> 2021-04-23T14:32:27.0415526Z  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2021-04-23T14:32:27.0416026Z  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2021-04-23T14:32:27.0416498Z  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2021-04-23T14:32:27.0417403Z  at 
> 

[jira] [Updated] (FLINK-16716) Update Roadmap after Flink 1.10 release

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16716:
---
  Labels: auto-deprioritized-critical auto-deprioritized-major  (was: 
auto-deprioritized-critical stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Update Roadmap after Flink 1.10 release
> ---
>
> Key: FLINK-16716
> URL: https://issues.apache.org/jira/browse/FLINK-16716
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Reporter: Fabian Hueske
>Priority: Minor
>  Labels: auto-deprioritized-critical, auto-deprioritized-major
>
> The roadmap on the Flink website needs to be updated to reflect the new 
> features of Flink 1.10 and the planned features and improvements of future 
> releases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21142) Flink guava Dependence problem

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21142:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 14 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it. If the "stale-assigned" label is not removed in 7 days, the 
issue will be automatically unassigned.


> Flink guava Dependence problem
> --
>
> Key: FLINK-21142
> URL: https://issues.apache.org/jira/browse/FLINK-21142
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hadoop Compatibility, Connectors / Hive
>Affects Versions: 1.12.0
>Reporter: YUJIANBO
>Assignee: Timo Walther
>Priority: Minor
>  Labels: pull-request-available, stale-assigned
>
> We set up a new Hadoop cluster, and we use the Flink 1.12.0 build compiled 
> from the release-1.12.0 branch. If I add the Hive jars to flink/lib/, it 
> reports errors.
> *Operating environment:*
>      flink1.12.0 
>      Hadoop 3.3.0
>      hive 3.1.2
> *Flink run official demo shell: /tmp/yjb/buildjar/flink1.12.0/bin/flink run 
> -m yarn-cluster /usr/local/flink1.12.0/examples/streaming/WordCount.jar*
> If I put one of the jar *flink-sql-connector-hive-3.1.2_2.11-1.12.0.jar* or 
> *hive-exec-3.1.2.jar* in the Lib directory and execute the above shell, an 
> error will be reported: java.lang.NoSuchMethodError: 
> com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V. 
> *We can see that this is a guava dependency conflict.*
> *My cluster's guava versions:*
>  /usr/local/hadoop-3.3.0/share/hadoop/yarn/csi/lib/guava-20.0.jar
>  /usr/local/hadoop-3.3.0/share/hadoop/common/lib/guava-27.0-jre.jar
>  /usr/local/apache-hive-3.1.2-bin/lib/guava-20.0.jar
>  /usr/local/apache-hive-3.1.2-bin/lib/jersey-guava-2.25.1.jar
>  /usr/local/spark-3.0.1-bin-hadoop3.2/jars/guava-14.0.1.jar
> *Can you give me some advice?*
>  Thank you!
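For context, the failing signature (ZLjava/lang/String;Ljava/lang/Object;)V is
the non-varargs checkArgument(boolean, String, Object) overload which, to the
best of my knowledge, was only added in guava 20.0. A sketch of the call shape
that triggers the error when an older guava jar wins on the classpath:

{code:java}
import com.google.common.base.Preconditions;

public class GuavaConflictDemo {

    public static void main(String[] args) {
        // Compiled against a newer guava, this call resolves to the
        // non-varargs overload checkArgument(boolean, String, Object).
        // At runtime, an older guava (e.g. 14.0.1) lacks that overload and
        // throws java.lang.NoSuchMethodError with exactly that signature.
        Preconditions.checkArgument(args.length > 0, "expected args, got %s", args.length);
    }
}
{code}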



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16799) add hive partition limit when read from hive

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16799:
---
  Labels: auto-deprioritized-major auto-unassigned pull-request-available  
(was: auto-unassigned pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> add hive partition limit when read from hive
> 
>
> Key: FLINK-16799
> URL: https://issues.apache.org/jira/browse/FLINK-16799
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
>Reporter: Jun Zhang
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Fix For: 1.14.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Add a partition limit when reading from Hive: a query will not be executed 
> if it attempts to fetch more partitions per table than the configured limit, 
> in order to avoid full table scans.
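A minimal sketch of such a guard, with a hypothetical per-scan limit; neither
the class nor the configuration option exists in Flink, this only illustrates
the proposal:

{code:java}
import java.util.List;

public class PartitionLimitGuard {

    private final int maxPartitionsPerScan; // <= 0 means unlimited

    public PartitionLimitGuard(int maxPartitionsPerScan) {
        this.maxPartitionsPerScan = maxPartitionsPerScan;
    }

    /** Rejects a scan that would read more partitions than the configured limit. */
    public void check(String table, List<String> partitionsToRead) {
        if (maxPartitionsPerScan > 0 && partitionsToRead.size() > maxPartitionsPerScan) {
            throw new IllegalStateException(
                    "Query on table " + table + " would read " + partitionsToRead.size()
                            + " partitions, exceeding the configured limit of "
                            + maxPartitionsPerScan
                            + ". Add a partition filter or raise the limit.");
        }
    }
}
{code}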



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16800) TypeMappingUtils#checkIfCompatible didn't deal with nested types

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16800:
---
  Labels: auto-deprioritized-major auto-unassigned pull-request-available  
(was: auto-unassigned pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> TypeMappingUtils#checkIfCompatible didn't deal with nested types
> 
>
> Key: FLINK-16800
> URL: https://issues.apache.org/jira/browse/FLINK-16800
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.10.0
>Reporter: Zhenghua Gao
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Fix For: 1.14.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> the planner uses TypeMappingUtils#checkIfCompatible to validate logical 
> schema and physical schema are compatible when translate 
> CatalogSinkModifyOperation to Calcite relational expression.  The validation 
> didn't deal with nested types well, which could throw the following 
> ValidationException:
> {code:java}
> Exception in thread "main" org.apache.flink.table.api.ValidationException:
> Type ARRAY> of table field 'old'
> does not match with the physical type ARRAY LEGACY('DECIMAL', 'DECIMAL')>> of the 'old' field of the TableSource return
> type.
> at
> org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$4(TypeMappingUtils.java:164)
> at
> org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:277)
> at
> org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:254)
> at
> org.apache.flink.table.types.logical.utils.LogicalTypeDefaultVisitor.visit(LogicalTypeDefaultVisitor.java:157)
> at org.apache.flink.table.types.logical.ArrayType.accept(ArrayType.java:110)
> at
> org.apache.flink.table.utils.TypeMappingUtils.checkIfCompatible(TypeMappingUtils.java:254)
> at
> org.apache.flink.table.utils.TypeMappingUtils.checkPhysicalLogicalTypeCompatible(TypeMappingUtils.java:160)
> at
> org.apache.flink.table.utils.TypeMappingUtils.lambda$computeInCompositeType$8(TypeMappingUtils.java:232)
> at java.util.stream.Collectors.lambda$toMap$58(Collectors.java:1321)
> at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
> at
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at
> org.apache.flink.table.utils.TypeMappingUtils.computeInCompositeType(TypeMappingUtils.java:214)
> at
> org.apache.flink.table.utils.TypeMappingUtils.computePhysicalIndices(TypeMappingUtils.java:192)
> at
> org.apache.flink.table.utils.TypeMappingUtils.computePhysicalIndicesOrTimeAttributeMarkers(TypeMappingUtils.java:112)
> at
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.computeIndexMapping(StreamExecTableSourceScan.scala:212)
> at
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:107)
> at
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:62)
> at
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
> at
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlan(StreamExecTableSourceScan.scala:62)
> at
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecExchange.translateToPlanInternal(StreamExecExchange.scala:84)
> at
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecExchange.translateToPlanInternal(StreamExecExchange.scala:44)
> at
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
> at
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecExchange.translateToPlan(StreamExecExchange.scala:44)
> at
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLimit.translateToPlanInternal(StreamExecLimit.scala:161)
> at
> 
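The gist of the needed fix is that compatibility checking must recurse into
nested types (for example the element type of an ARRAY) rather than comparing
only the outermost type. A simplified sketch with made-up type classes, not the
real TypeMappingUtils code:

{code:java}
import java.util.Objects;

public class NestedTypeCheck {

    interface Type {}

    static final class AtomicType implements Type {
        final String name;
        AtomicType(String name) { this.name = name; }
    }

    static final class ArrayType implements Type {
        final Type elementType;
        ArrayType(Type elementType) { this.elementType = elementType; }
    }

    /** Returns true if the logical and physical types are compatible. */
    static boolean isCompatible(Type logical, Type physical) {
        if (logical instanceof ArrayType && physical instanceof ArrayType) {
            // The crucial part: descend into the element types instead of
            // rejecting the arrays because their printed forms differ.
            return isCompatible(((ArrayType) logical).elementType,
                    ((ArrayType) physical).elementType);
        }
        if (logical instanceof AtomicType && physical instanceof AtomicType) {
            return Objects.equals(((AtomicType) logical).name,
                    ((AtomicType) physical).name);
        }
        return false;
    }
}
{code}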

[jira] [Updated] (FLINK-16883) No support for log4j2 configuration formats besides properties

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16883:
---
  Labels: auto-deprioritized-major auto-unassigned pull-request-available  
(was: auto-unassigned pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> No support for log4j2 configuration formats besides properties
> --
>
> Key: FLINK-16883
> URL: https://issues.apache.org/jira/browse/FLINK-16883
> Project: Flink
>  Issue Type: Improvement
>  Components: Command Line Client
>Affects Versions: 1.11.0
>Reporter: Fabian Paul
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If `flink-console.sh` is used to start a Flink cluster, the env java opts 
> precede the log settings 
> ([link|https://github.com/apache/flink/blob/master/flink-dist/src/main/flink-bin/bin/flink-console.sh#L73]).
> This way the log setting `log4j.configurationFile` will always overwrite 
> previously set keys. Since `log4j.configurationFile` is set to 
> `log4j.properties`, it is not possible to leverage formats other than 
> properties for the configuration.
>  
> My proposal would be to switch the order of the configurations so that the 
> log settings precede the env java opts. Users could then overwrite the 
> default file with their own configuration.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22627) Remove SlotManagerImpl

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22627:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 14 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it. If the "stale-assigned" label is not removed in 7 days, the 
issue will be automatically unassigned.


> Remove SlotManagerImpl
> --
>
> Key: FLINK-22627
> URL: https://issues.apache.org/jira/browse/FLINK-22627
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0
>Reporter: Yangze Guo
>Assignee: Yangze Guo
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.14.0
>
>
> As the declarative resource management is completed (FLINK-10404) and the old 
> {{SlotPoolImpl}} is removed in FLINK-22477, it's time to remove the 
> {{SlotManagerImpl}} and
>  all related classes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21345) NullPointerException LogicalCorrelateToJoinFromTemporalTableFunctionRule.scala:157

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21345:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 14 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it. If the "stale-assigned" label is not removed in 7 days, the 
issue will be automatically unassigned.


> NullPointerException 
> LogicalCorrelateToJoinFromTemporalTableFunctionRule.scala:157
> --
>
> Key: FLINK-21345
> URL: https://issues.apache.org/jira/browse/FLINK-21345
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.12.1
> Environment: Planner: BlinkPlanner
> Flink Version: 1.12.1_2.11
> Java Version: 1.8
> OS: mac os
>Reporter: lynn1.zhang
>Assignee: lynn1.zhang
>Priority: Minor
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.14.0
>
> Attachments: image-2021-02-10-16-00-45-553.png
>
>
> First Step: Create 2 Source Tables as below:
> {code:java}
> CREATE TABLE test_streaming(
>  vid BIGINT,
>  ts BIGINT,
>  proc AS proctime()
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'test-streaming',
>  'properties.bootstrap.servers' = '127.0.0.1:9092',
>  'scan.startup.mode' = 'latest-offset',
>  'format' = 'json'
> );
> CREATE TABLE test_streaming2(
>  vid BIGINT,
>  ts BIGINT,
>  proc AS proctime()
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'test-streaming2',
>  'properties.bootstrap.servers' = '127.0.0.1:9092',
>  'scan.startup.mode' = 'latest-offset',
>  'format' = 'json'
> );
> {code}
> Second Step: Create a TEMPORARY Table Function, function name:dim, key:vid, 
> timestamp:proctime()
> Third Step: test_streaming union all  test_streaming2 join dim like below:
> {code:java}
> SELECT r.vid,d.name,timestamp_from_long(r.ts)
> FROM (
> SELECT * FROM test_streaming UNION ALL SELECT * FROM test_streaming2
> ) AS r,
> LATERAL TABLE (dim(r.proc)) AS d
> WHERE r.vid = d.vid;
> {code}
> Exception Detail: (if only test_streaming or test_streaming2 alone joins the 
> temporary table function, the program runs ok)
> {code:java}
> Exception in thread "main" java.lang.NullPointerException
>   at 
> org.apache.flink.table.planner.plan.rules.logical.LogicalCorrelateToJoinFromTemporalTableFunctionRule.getRelOptSchema(LogicalCorrelateToJoinFromTemporalTableFunctionRule.scala:157)
>   at 
> org.apache.flink.table.planner.plan.rules.logical.LogicalCorrelateToJoinFromTemporalTableFunctionRule.onMatch(LogicalCorrelateToJoinFromTemporalTableFunctionRule.scala:99)
>   at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
>   at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
>   at 
> org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:63)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:60)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:155)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:155)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:742)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)

[jira] [Closed] (FLINK-20208) Remove outdated in-progress files in StreamingFileSink

2021-06-10 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot closed FLINK-20208.
--
Resolution: Auto Closed

This issue was labeled "stale-minor" 7 days ago and has not received any updates so 
I have gone ahead and closed it.  If you are still affected by this or would 
like to raise the priority of this ticket please re-open, removing the label 
"auto-closed" and raise the ticket priority accordingly.


> Remove outdated in-progress files in StreamingFileSink
> --
>
> Key: FLINK-20208
> URL: https://issues.apache.org/jira/browse/FLINK-20208
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.11.2
>Reporter: Alexander Trushev
>Priority: Minor
>  Labels: auto-closed
>
> Assume a job has a StreamingFileSink with OnCheckpointRollingPolicy.
> In the following case:
>  # Acknowledged checkpoint
>  # Event is written to new .part-X-Y.inprogress.UUID1
>  # Job failure
>  # Job recovery from the checkpoint
>  # Event is written to new .part-X-Y.inprogress.UUID2
> we have the outdated part file .part-X-Y.inprogress.UUID1, where X is the 
> subtask index and Y is the part counter.
> *Proposal*
>  Add method
> {code:java}
> boolean shouldRemoveOutdatedParts()
> {code}
> to RollingPolicy.
>  Add configurable parameter to OnCheckpointRollingPolicy and to 
> DefaultRollingPolicy that will be returned by shouldRemoveOutdatedParts() (by 
> default false)
> We can remove such outdated part files with the following algorithm while 
> restoring a job from a checkpoint (a sketch follows below):
>  # After the buckets state is initialized, check shouldRemoveOutdatedParts(). 
> If true, then (2)
>  # For each bucket, scan the bucket directory
>  # If three conditions are all true, then remove the part file:
>  the part filename contains "inprogress";
>  the subtask index from the filename equals the current subtask index;
>  the part counter from the filename is greater than or equal to the current 
> max part counter.
> I propose to remove outdated files, because the similar proposal to overwrite 
> outdated files has not been implemented 
> ([https://issues.apache.org/jira/browse/FLINK-6])
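A minimal sketch of the cleanup step described in the algorithm above, assuming
the ".part-X-Y.inprogress.UUID" naming from the description; this is
illustrative only, not actual StreamingFileSink code:

{code:java}
import java.io.File;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OutdatedPartCleaner {

    // .part-<subtaskIndex>-<partCounter>.inprogress.<uuid>
    private static final Pattern IN_PROGRESS =
            Pattern.compile("\\.part-(\\d+)-(\\d+)\\.inprogress\\..+");

    /** Deletes in-progress files left over by this subtask before the failure. */
    public void cleanBucket(File bucketDir, int subtaskIndex, long maxPartCounter) {
        File[] files = bucketDir.listFiles();
        if (files == null) {
            return; // not a directory or an I/O error
        }
        for (File f : files) {
            Matcher m = IN_PROGRESS.matcher(f.getName());
            if (m.matches()
                    && Integer.parseInt(m.group(1)) == subtaskIndex
                    && Long.parseLong(m.group(2)) >= maxPartCounter) {
                f.delete(); // outdated in-progress file, safe to remove
            }
        }
    }
}
{code}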



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

