[jira] [Updated] (FLINK-23310) Correct the `ModifyKindSetTrait` for `GroupWindowAggregate` when the input is an update stream

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-23310:

Fix Version/s: 1.16.0

> Correct the `ModifyKindSetTrait` for `GroupWindowAggregate` when the input is 
> an update stream 
> ---
>
> Key: FLINK-23310
> URL: https://issues.apache.org/jira/browse/FLINK-23310
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.13.0
>Reporter: Shuo Cheng
>Priority: Minor
> Fix For: 1.15.0, 1.16.0
>
>
> Following FLINK-22781, group windows now support update input streams. Just 
> like unbounded aggregates, a group window may also emit DELETE records, so 
> the `ModifyKindSetTrait` for group windows should be updated as well.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-22585) Add deprecated message when "-m yarn-cluster" is used

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-22585:

Fix Version/s: 1.16.0

> Add deprecated message when "-m yarn-cluster" is used
> -
>
> Key: FLINK-22585
> URL: https://issues.apache.org/jira/browse/FLINK-22585
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: Yang Wang
>Assignee: Rainie Li
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-assigned
> Fix For: 1.15.0, 1.16.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> The unified executor interface has been around for a long time now; it is 
> activated via "--target 
> yarn-per-job/yarn-session/kubernetes-application", which is more descriptive 
> and clearer. We should deprecate "-m yarn-cluster" and encourage our users 
> to use the new CLI commands.
>  
> However, AFAIK, many companies integrate these CLI commands with their 
> deployers, so we cannot remove "-m yarn-cluster" any time soon. Maybe we 
> could do it in release 2.0, since breaking changes are allowed there.
>  
> For now, I suggest adding the {{@Deprecated}} annotation and printing a WARN 
> log message when "-m yarn-cluster" is used. This lets users know the 
> long-term goal and encourages them to migrate as soon as possible.
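As an illustration of the suggestion above, here is a minimal sketch of such a deprecation warning check. The class and method names are hypothetical, not Flink's actual CLI code:

```java
// Hypothetical sketch of the suggested check: emit a WARN-style message when the
// legacy "-m yarn-cluster" form is present in the CLI arguments.
public class DeprecatedTargetCheck {

    /** Returns a warning message if "-m yarn-cluster" appears in the args, else null. */
    public static String checkDeprecatedTarget(String[] args) {
        for (int i = 0; i < args.length - 1; i++) {
            if ("-m".equals(args[i]) && "yarn-cluster".equals(args[i + 1])) {
                return "WARN: '-m yarn-cluster' is deprecated; use '--target "
                        + "yarn-per-job/yarn-session/yarn-application' instead.";
            }
        }
        return null;
    }
}
```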





[jira] [Updated] (FLINK-23921) WindowJoinITCase.testInnerJoinWithIsNotDistinctFromOnWTF is not stable

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-23921:

Fix Version/s: 1.16.0

> WindowJoinITCase.testInnerJoinWithIsNotDistinctFromOnWTF is not stable
> --
>
> Key: FLINK-23921
> URL: https://issues.apache.org/jira/browse/FLINK-23921
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.14.0
>Reporter: Yangze Guo
>Priority: Major
> Fix For: 1.15.0, 1.16.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22639&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=9759





[jira] [Updated] (FLINK-19935) Supports configure heap memory of sql-client to avoid OOM

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-19935:

Fix Version/s: 1.16.0

> Supports configure heap memory of sql-client to avoid OOM
> -
>
> Key: FLINK-19935
> URL: https://issues.apache.org/jira/browse/FLINK-19935
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Client
>Affects Versions: 1.11.2
>Reporter: harold.miao
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.15.0, 1.16.0
>
> Attachments: image-2020-11-03-10-31-08-294.png
>
>
> Hi,
> when using sql-client to submit a job, the command below does not set any 
> JVM heap parameters, which caused OOM errors in my production environment:
> exec $JAVA_RUN $JVM_ARGS "${log_setting[@]}" -classpath "`manglePathList 
> "$CC_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" 
> org.apache.flink.table.client.SqlClient "$@"
>  
> !image-2020-11-03-10-31-08-294.png!





[jira] [Updated] (FLINK-21307) Revisit activation model of FlinkSecurityManager

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-21307:

Fix Version/s: 1.16.0

> Revisit activation model of FlinkSecurityManager
> 
>
> Key: FLINK-21307
> URL: https://issues.apache.org/jira/browse/FLINK-21307
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.13.0
>Reporter: Robert Metzger
>Priority: Minor
>  Labels: auto-deprioritized-critical, auto-deprioritized-major
> Fix For: 1.15.0, 1.16.0
>
>
> In FLINK-15156, we introduced a feature that allows users to log or 
> completely disable calls to System.exit(). This feature is enabled for 
> certain threads / code sections intended to execute user-code.
> The activation of the security manager (for monitoring user calls to 
> System.exit()) is currently not well-defined and only implemented on a 
> best-effort basis.
> This ticket is to revisit the activation.





[jira] [Updated] (FLINK-23867) FlinkKafkaInternalProducerITCase.testCommitTransactionAfterClosed fails with IllegalStateException

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-23867:

Fix Version/s: 1.16.0

> FlinkKafkaInternalProducerITCase.testCommitTransactionAfterClosed fails with 
> IllegalStateException
> --
>
> Key: FLINK-23867
> URL: https://issues.apache.org/jira/browse/FLINK-23867
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.13.2
>Reporter: Xintong Song
>Assignee: Fabian Paul
>Priority: Major
>  Labels: stale-assigned, test-stability
> Fix For: 1.15.0, 1.16.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22465&view=logs&j=72d4811f-9f0d-5fd0-014a-0bc26b72b642&t=c1d93a6a-ba91-515d-3196-2ee8019fbda7&l=6862
> {code}
> Aug 18 23:20:14 [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 51.905 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase
> Aug 18 23:20:14 [ERROR] 
> testCommitTransactionAfterClosed(org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase)
>   Time elapsed: 7.848 s  <<< ERROR!
> Aug 18 23:20:14 java.lang.Exception: Unexpected exception, 
> expected but was
> Aug 18 23:20:14   at 
> org.junit.internal.runners.statements.ExpectException.evaluate(ExpectException.java:28)
> Aug 18 23:20:14   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> Aug 18 23:20:14   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> Aug 18 23:20:14   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 18 23:20:14   at java.lang.Thread.run(Thread.java:748)
> Aug 18 23:20:14 Caused by: java.lang.AssertionError: The message should have 
> been successfully sent expected null, but 
> was:
> Aug 18 23:20:14   at org.junit.Assert.fail(Assert.java:88)
> Aug 18 23:20:14   at org.junit.Assert.failNotNull(Assert.java:755)
> Aug 18 23:20:14   at org.junit.Assert.assertNull(Assert.java:737)
> Aug 18 23:20:14   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase.getClosedProducer(FlinkKafkaInternalProducerITCase.java:228)
> Aug 18 23:20:14   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase.testCommitTransactionAfterClosed(FlinkKafkaInternalProducerITCase.java:177)
> Aug 18 23:20:14   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Aug 18 23:20:14   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Aug 18 23:20:14   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Aug 18 23:20:14   at java.lang.reflect.Method.invoke(Method.java:498)
> Aug 18 23:20:14   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Aug 18 23:20:14   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Aug 18 23:20:14   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Aug 18 23:20:14   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Aug 18 23:20:14   at 
> org.junit.internal.runners.statements.ExpectException.evaluate(ExpectException.java:19)
> {code}





[jira] [Updated] (FLINK-18202) Introduce Protobuf format

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-18202:

Fix Version/s: 1.16.0

> Introduce Protobuf format
> -
>
> Key: FLINK-18202
> URL: https://issues.apache.org/jira/browse/FLINK-18202
> Project: Flink
>  Issue Type: New Feature
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Reporter: Benchao Li
>Assignee: xingyuan cheng
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, sprint, 
> stale-assigned
> Fix For: 1.15.0, 1.16.0
>
> Attachments: image-2020-06-15-17-18-03-182.png
>
>
> Protobuf [1] is a very well-known and widely used (de)serialization 
> framework. The mailing list [2] also has some discussion about this. It 
> would be a useful feature.
> This issue may need some design work, or a FLIP.
> [1] [https://developers.google.com/protocol-buffers]
> [2] [http://apache-flink.147419.n8.nabble.com/Flink-SQL-UDF-td3725.html]





[jira] [Updated] (FLINK-18624) Document CREATE TEMPORARY TABLE

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-18624:

Fix Version/s: 1.16.0

> Document CREATE TEMPORARY TABLE
> ---
>
> Key: FLINK-18624
> URL: https://issues.apache.org/jira/browse/FLINK-18624
> Project: Flink
>  Issue Type: New Feature
>  Components: Documentation
>Reporter: Jingsong Lee
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.15.0, 1.11.7, 1.16.0
>
>






[jira] [Updated] (FLINK-14222) Optimize for Python UDFs with all parameters are constant values

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-14222:

Fix Version/s: 1.16.0

> Optimize for Python UDFs with all parameters are constant values
> 
>
> Key: FLINK-14222
> URL: https://issues.apache.org/jira/browse/FLINK-14222
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Dian Fu
>Priority: Major
> Fix For: 1.15.0, 1.16.0
>
>
> As discussed in [https://github.com/apache/flink/pull/9766], a call to a 
> Python UDF whose parameters are all constant values could be pre-evaluated 
> to a constant if the UDF is deterministic.





[jira] [Updated] (FLINK-24547) A function named row_kind() can get row_kind

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-24547:

Fix Version/s: 1.16.0

> A function named row_kind() can get row_kind
> 
>
> Key: FLINK-24547
> URL: https://issues.apache.org/jira/browse/FLINK-24547
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Reporter: jocean.shi
>Priority: Not a Priority
>  Labels: pull-request-available
> Fix For: 1.15.0, 1.16.0
>
>
> Add a function that returns the row kind. In many cases we need the row 
> kind of CDC data, e.g. to aggregate on it or to filter rows by kind. The 
> function returns "+I", "-U", "+U" or "-D", and can be used as 
> "select row_kind() as op from xxx".
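To make the proposal concrete, here is a standalone sketch of the changelog "row kind" notion that such a row_kind() function would expose to SQL. The enum mirrors the short strings from the ticket; the row type and filter are illustrative only, not Flink's actual RowKind API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model of changelog rows tagged with a row kind, and a filter that
// mimics what "WHERE row_kind() = '-D'" would do in SQL.
public class RowKindExample {

    public enum RowKind {
        INSERT("+I"), UPDATE_BEFORE("-U"), UPDATE_AFTER("+U"), DELETE("-D");

        public final String shortString;

        RowKind(String shortString) {
            this.shortString = shortString;
        }
    }

    public static class ChangelogRow {
        public final RowKind kind;
        public final String payload;

        public ChangelogRow(RowKind kind, String payload) {
            this.kind = kind;
            this.payload = payload;
        }
    }

    /** Keeps only rows of the given kind, e.g. "-D" for deletions. */
    public static List<ChangelogRow> filterByKind(List<ChangelogRow> rows, String shortString) {
        List<ChangelogRow> result = new ArrayList<>();
        for (ChangelogRow row : rows) {
            if (row.kind.shortString.equals(shortString)) {
                result.add(row);
            }
        }
        return result;
    }
}
```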





[jira] [Updated] (FLINK-22235) Document stability concerns of flamegraphs for heavier jobs

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-22235:

Fix Version/s: 1.16.0

> Document stability concerns of flamegraphs for heavier jobs
> ---
>
> Key: FLINK-22235
> URL: https://issues.apache.org/jira/browse/FLINK-22235
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Runtime / Web Frontend
>Reporter: Chesnay Schepler
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.15.0, 1.16.0
>
>
> The FlameGraph feature added in FLINK-13550 has some known scalability 
> issues, because it issues one RPC call per subtask. This may put a lot of 
> pressure on the RPC system, so it should be used with caution for heavier 
> jobs until we improve this.





[jira] [Updated] (FLINK-15729) Table planner dependency instructions for executing in IDE can be improved

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-15729:

Fix Version/s: 1.16.0

> Table planner dependency instructions for executing in IDE can be improved
> --
>
> Key: FLINK-15729
> URL: https://issues.apache.org/jira/browse/FLINK-15729
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Reporter: Tzu-Li (Gordon) Tai
>Priority: Minor
> Fix For: 1.15.0, 1.16.0
>
>
> In the docs: 
> https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/#table-program-dependencies
> It would be clearer to add that, for this to work in the IDE, you 
> additionally need to enable the option to `Include dependencies with 
> "provided" scope`.





[jira] [Updated] (FLINK-20091) Introduce avro.ignore-parse-errors for AvroRowDataDeserializationSchema

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-20091:

Fix Version/s: 1.16.0

> Introduce avro.ignore-parse-errors  for AvroRowDataDeserializationSchema
> 
>
> Key: FLINK-20091
> URL: https://issues.apache.org/jira/browse/FLINK-20091
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.12.0
>Reporter: hailong wang
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Fix For: 1.15.0, 1.16.0
>
>
> Introduce avro.ignore-parse-errors to allow users to skip rows with parsing 
> errors instead of failing when deserializing Avro-format data.
> This is useful when there is dirty data; without this option, users cannot 
> skip the dirty rows.
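The skip-on-error pattern described above can be sketched generically. This illustrates the option's intended behavior only; the class and names below are hypothetical, not Flink's actual AvroRowDataDeserializationSchema:

```java
import java.util.Optional;
import java.util.function.Function;

// Generic sketch of the ignore-parse-errors behavior: wrap a deserializer so that
// rows which fail to parse are skipped instead of failing the job.
public class SkippingDeserializer<T> {

    private final Function<byte[], T> inner;
    private final boolean ignoreParseErrors;

    public SkippingDeserializer(Function<byte[], T> inner, boolean ignoreParseErrors) {
        this.inner = inner;
        this.ignoreParseErrors = ignoreParseErrors;
    }

    /** Returns an empty Optional for a dirty row when ignoreParseErrors is set. */
    public Optional<T> deserialize(byte[] bytes) {
        try {
            return Optional.of(inner.apply(bytes));
        } catch (RuntimeException e) {
            if (ignoreParseErrors) {
                return Optional.empty(); // skip the dirty row
            }
            throw e; // default behavior: fail as before
        }
    }
}
```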





[jira] [Updated] (FLINK-26144) SavepointFormatITCase did NOT fail with changelog.enabled randomized

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-26144:

Fix Version/s: 1.16.0

> SavepointFormatITCase did NOT fail with changelog.enabled randomized 
> -
>
> Key: FLINK-26144
> URL: https://issues.apache.org/jira/browse/FLINK-26144
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.15.0
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
> Fix For: 1.15.0, 1.16.0
>
>
> Extracted from FLINK-26093.
>  
> SavepointFormatITCase had an issue when run manually with the 
> ChangelogBackend enabled (now resolved, see FLINK-26093).
> However, it did NOT fail on master where checkpointing.changelog is set to 
> random in pom.xml.
>  
> This "random" translates to
> {code:java}
> randomize(conf, ENABLE_STATE_CHANGE_LOG, true, false);{code}
> If only "true" is left as a candidate, the test does fail.
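For readers unfamiliar with the test utility, the randomize(...) call above roughly behaves like the following sketch. This is an assumption-laden illustration, not Flink's actual randomization helper:

```java
import java.util.Map;
import java.util.Random;

// Rough sketch of what a test-configuration randomize(conf, key, candidates...)
// helper does: if the option is not already set, pick one candidate at random.
// Illustrative only; Flink's real helper (seeded per build) differs.
public class RandomizeSketch {

    private static final Random RANDOM = new Random();

    @SafeVarargs
    public static <T> void randomize(Map<String, T> conf, String key, T... candidates) {
        // an explicitly configured value is assumed to win over randomization
        if (!conf.containsKey(key)) {
            conf.put(key, candidates[RANDOM.nextInt(candidates.length)]);
        }
    }
}
```

With candidates (true, false) the test only sometimes runs with the changelog enabled, which is why the failure did not reproduce on master; with the single candidate true it always does.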





[jira] [Updated] (FLINK-20849) Improve JavaDoc and logging of new KafkaSource

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-20849:

Fix Version/s: 1.16.0

> Improve JavaDoc and logging of new KafkaSource
> --
>
> Key: FLINK-20849
> URL: https://issues.apache.org/jira/browse/FLINK-20849
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Qingsheng Ren
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.15.0, 1.12.8, 1.16.0
>
>
> Some JavaDoc and logging messages of the new KafkaSource should be more 
> descriptive to provide more information to users.





[jira] [Updated] (FLINK-19527) Update SQL Pages

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-19527:

Fix Version/s: 1.16.0

> Update SQL Pages
> 
>
> Key: FLINK-19527
> URL: https://issues.apache.org/jira/browse/FLINK-19527
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: Seth Wiesman
>Priority: Major
>  Labels: auto-unassigned, pull-request-available
> Fix For: 1.15.0, 1.16.0
>
>
> SQL
> Goal: Show users the main features early and link to concepts if necessary.
> How to use SQL? Intended for users with SQL knowledge.
> - Overview: Getting started, with a link to the more detailed execution 
> section.
> - Full Reference: Available operations in SQL as a table. This location 
> allows us to further split the page in the future if we think an operation 
> needs more space, without affecting the top-level structure.
> - Data Definition: Explain special SQL syntax around DDL.
> - Pattern Matching: Make pattern matching more visible.
> - ... more features in the future





[jira] [Updated] (FLINK-24865) Support MATCH_RECOGNIZE in Batch mode

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-24865:

Fix Version/s: 1.16.0

> Support MATCH_RECOGNIZE in Batch mode
> -
>
> Key: FLINK-24865
> URL: https://issues.apache.org/jira/browse/FLINK-24865
> Project: Flink
>  Issue Type: Sub-task
>  Components: Library / CEP
>Affects Versions: 1.15.0
>Reporter: Martijn Visser
>Assignee: Nicholas Jiang
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.15.0, 1.16.0
>
>
> Currently MATCH_RECOGNIZE only works in Streaming mode. It should also be 
> supported in Batch mode.





[jira] [Updated] (FLINK-22065) Improve the error message when input invalid command in the sql client

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-22065:

Fix Version/s: 1.16.0

> Improve the error message when input invalid command in the sql client
> --
>
> Key: FLINK-22065
> URL: https://issues.apache.org/jira/browse/FLINK-22065
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.13.0
>Reporter: Shengkai Fang
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Fix For: 1.15.0, 1.16.0
>
>
> !https://static.dingtalk.com/media/lALPD26eOprT2ztwzQWg_1440_112.png_720x720g.jpg?renderWidth=1440&renderHeight=112&renderOrientation=1&isLocal=0&bizType=im!





[jira] [Updated] (FLINK-18634) FlinkKafkaProducerITCase.testRecoverCommittedTransaction failed with "Timeout expired after 60000milliseconds while awaiting InitProducerId"

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-18634:

Fix Version/s: 1.16.0

> FlinkKafkaProducerITCase.testRecoverCommittedTransaction failed with "Timeout 
> expired after 60000milliseconds while awaiting InitProducerId"
> 
>
> Key: FLINK-18634
> URL: https://issues.apache.org/jira/browse/FLINK-18634
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.0, 1.12.0, 1.13.0, 1.14.0, 1.15.0
>Reporter: Dian Fu
>Assignee: Fabian Paul
>Priority: Major
>  Labels: auto-unassigned, stale-assigned, test-stability
> Fix For: 1.15.0, 1.16.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4590&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=684b1416-4c17-504e-d5ab-97ee44e08a20
> {code}
> 2020-07-17T11:43:47.9693015Z [ERROR] Tests run: 12, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 269.399 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase
> 2020-07-17T11:43:47.9693862Z [ERROR] 
> testRecoverCommittedTransaction(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase)
>   Time elapsed: 60.679 s  <<< ERROR!
> 2020-07-17T11:43:47.9694737Z org.apache.kafka.common.errors.TimeoutException: 
> org.apache.kafka.common.errors.TimeoutException: Timeout expired after 
> 60000milliseconds while awaiting InitProducerId
> 2020-07-17T11:43:47.9695376Z Caused by: 
> org.apache.kafka.common.errors.TimeoutException: Timeout expired after 
> 60000milliseconds while awaiting InitProducerId
> {code}





[jira] [Updated] (FLINK-17987) KafkaITCase.testStartFromGroupOffsets fails with SchemaException: Error reading field

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-17987:

Fix Version/s: 1.16.0

> KafkaITCase.testStartFromGroupOffsets fails with SchemaException: Error 
> reading field
> -
>
> Key: FLINK-17987
> URL: https://issues.apache.org/jira/browse/FLINK-17987
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.12.0, 1.14.0
>Reporter: Robert Metzger
>Assignee: Fabian Paul
>Priority: Major
>  Labels: auto-deprioritized-critical, stale-assigned, 
> test-stability
> Fix For: 1.15.0, 1.16.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2276&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=684b1416-4c17-504e-d5ab-97ee44e08a20
> {code}
> 2020-05-27T13:05:24.5355101Z Test 
> testStartFromGroupOffsets(org.apache.flink.streaming.connectors.kafka.KafkaITCase)
>  failed with:
> 2020-05-27T13:05:24.5355935Z 
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'api_versions': Error reading array of size 131084, only 12 bytes available
> 2020-05-27T13:05:24.5356501Z  at 
> org.apache.kafka.common.protocol.types.Schema.read(Schema.java:77)
> 2020-05-27T13:05:24.5356911Z  at 
> org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:308)
> 2020-05-27T13:05:24.5357350Z  at 
> org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:152)
> 2020-05-27T13:05:24.5357838Z  at 
> org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:687)
> 2020-05-27T13:05:24.5358333Z  at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:811)
> 2020-05-27T13:05:24.5358840Z  at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:544)
> 2020-05-27T13:05:24.5359297Z  at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)
> 2020-05-27T13:05:24.5359832Z  at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
> 2020-05-27T13:05:24.5360659Z  at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:227)
> 2020-05-27T13:05:24.5361292Z  at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:161)
> 2020-05-27T13:05:24.5361885Z  at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:245)
> 2020-05-27T13:05:24.5362454Z  at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:657)
> 2020-05-27T13:05:24.5363089Z  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1425)
> 2020-05-27T13:05:24.5363558Z  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1384)
> 2020-05-27T13:05:24.5364130Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl$KafkaOffsetHandlerImpl.setCommittedOffset(KafkaTestEnvironmentImpl.java:444)
> 2020-05-27T13:05:24.5365027Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runStartFromGroupOffsets(KafkaConsumerTestBase.java:554)
> 2020-05-27T13:05:24.5365596Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase.testStartFromGroupOffsets(KafkaITCase.java:158)
> 2020-05-27T13:05:24.5366035Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-05-27T13:05:24.5366425Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-05-27T13:05:24.5366871Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-05-27T13:05:24.5367285Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-05-27T13:05:24.5367675Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-05-27T13:05:24.5368142Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-05-27T13:05:24.5368655Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-05-27T13:05:24.5369103Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-05-27T13:05:24.5369590Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> 2020-05-27T13:05:24.5370094Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> 2020-05-27T13:05:24.5370543Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2020-05-27T13:05:24.

[jira] [Updated] (FLINK-20687) Missing 'yarn-application' target in CLI help message

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-20687:

Fix Version/s: 1.16.0

> Missing 'yarn-application' target in CLI help message
> -
>
> Key: FLINK-20687
> URL: https://issues.apache.org/jira/browse/FLINK-20687
> Project: Flink
>  Issue Type: Improvement
>  Components: Command Line Client
>Affects Versions: 1.12.0
>Reporter: Ruguo Yu
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.15.0, 1.12.8, 1.16.0
>
> Attachments: image-2020-12-20-21-48-18-391.png, 
> image-2020-12-20-22-02-01-312.png
>
>
> Missing 'yarn-application' target in CLI help message when I enter the 
> command 'flink run-application -h', as follows:
> !image-2020-12-20-21-48-18-391.png|width=516,height=372!
> The target name is obtained through SPI, and I checked that the SPI 
> META-INF/services entry is correct.
>  
> Next, when I put flink-shaded-hadoop-*-.jar into the Flink lib directory or 
> set HADOOP_CLASSPATH, it does show 'yarn-application', as follows:
> !image-2020-12-20-22-02-01-312.png|width=808,height=507!
> However, I think it is reasonable to show 'yarn-application' without any 
> such extra step.





[jira] [Updated] (FLINK-21953) Add documentation on batch mode support of Python DataStream API

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-21953:

Fix Version/s: 1.16.0

> Add documentation on batch mode support of Python DataStream API
> 
>
> Key: FLINK-21953
> URL: https://issues.apache.org/jira/browse/FLINK-21953
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python, Documentation
>Reporter: Dian Fu
>Priority: Major
>  Labels: auto-unassigned
> Fix For: 1.15.0, 1.16.0
>
>






[jira] [Updated] (FLINK-22607) Examples use deprecated AscendingTimestampExtractor

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-22607:

Fix Version/s: 1.16.0

> Examples use deprecated AscendingTimestampExtractor
> ---
>
> Key: FLINK-22607
> URL: https://issues.apache.org/jira/browse/FLINK-22607
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Affects Versions: 1.12.0, 1.12.1, 1.12.2, 1.13.0, 1.12.3
>Reporter: Al-assad
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
> Fix For: 1.15.0, 1.16.0
>
>
> The streaming example 
> [TopSpeedWindowing|https://github.com/apache/flink/blob/master/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/windowing/TopSpeedWindowing.java]
>  generates watermarks using the deprecated 
> {color:#0747a6}_DataStream#assignTimestampsAndWatermarks(AscendingTimestampExtractor)_{color};
> the relevant [Flink 
> docs|https://ci.apache.org/projects/flink/flink-docs-stable/docs/dev/datastream/event-time/generating_watermarks/#using-watermark-strategies]
>  recommend using 
> {color:#0747a6}_DataStream#assignTimestampsAndWatermarks(WatermarkStrategy)_{color}
>  instead.
>  
>  
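For ascending timestamps, the recommended replacement strategy essentially tracks the maximum timestamp seen so far. A standalone sketch of that behavior (not Flink's actual implementation, which lives behind WatermarkStrategy#forMonotonousTimestamps):

```java
// Minimal illustration of ascending-timestamp watermarking: the watermark trails
// the largest event timestamp observed by one millisecond, so an event with the
// same timestamp is not considered late.
public class AscendingWatermark {

    private long maxTimestamp = Long.MIN_VALUE;

    /** Observes an event timestamp and returns the resulting watermark. */
    public long onEvent(long eventTimestamp) {
        maxTimestamp = Math.max(maxTimestamp, eventTimestamp);
        return currentWatermark();
    }

    public long currentWatermark() {
        return maxTimestamp == Long.MIN_VALUE ? Long.MIN_VALUE : maxTimestamp - 1;
    }
}
```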





[jira] [Updated] (FLINK-20280) Support batch mode for Python DataStream API

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-20280:

Fix Version/s: 1.16.0

> Support batch mode for Python DataStream API
> 
>
> Key: FLINK-20280
> URL: https://issues.apache.org/jira/browse/FLINK-20280
> Project: Flink
>  Issue Type: New Feature
>  Components: API / Python
>Reporter: Dian Fu
>Priority: Major
>  Labels: auto-unassigned, pull-request-available
> Fix For: 1.15.0, 1.16.0
>
>
> Currently, the Python DataStream API still does not support batch mode.





[jira] [Updated] (FLINK-23778) UpsertKafkaTableITCase.testKafkaSourceSinkWithKeyAndFullValue hangs on Azure

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-23778:

Fix Version/s: 1.16.0

> UpsertKafkaTableITCase.testKafkaSourceSinkWithKeyAndFullValue hangs on Azure
> 
>
> Key: FLINK-23778
> URL: https://issues.apache.org/jira/browse/FLINK-23778
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.14.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: stale-assigned, test-stability
> Fix For: 1.15.0, 1.16.0
>
>
> The test {{UpsertKafkaTableITCase.testKafkaSourceSinkWithKeyAndFullValue}} 
> hangs on Azure.
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22123&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=15a22db7-8faa-5b34-3920-d33c9f0ca23c&l=8847





[jira] [Updated] (FLINK-24259) Exception message when Kafka topic is null in FlinkKafkaConsumer

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-24259:

Fix Version/s: 1.16.0

> Exception message when Kafka topic is null in FlinkKafkaConsumer
> 
>
> Key: FLINK-24259
> URL: https://issues.apache.org/jira/browse/FLINK-24259
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
> Environment: All
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: connector, consumer, exception, kafka, 
> pull-request-available, stale-assigned
> Fix For: 1.15.0, 1.16.0
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> If the topic name is null for the FlinkKafkaConsumer, we get a Kafka 
> exception with the message:
> {quote}{{Error computing size for field 'topics': Error computing size for 
> field 'name': Missing value for field 'name' which has no default value.}}
> {quote}
> The KafkaTopicDescriptor could check whether the topic name is null, empty, 
> or blank and provide a more informative message.
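A minimal sketch of such an up-front validation (hypothetical helper names; the actual fix would live inside the descriptor class, not a standalone class):

```java
import java.util.Arrays;
import java.util.List;

public class TopicValidation {

    /** Hypothetical check mirroring what KafkaTopicDescriptor could do. */
    static boolean isInvalid(String topic) {
        return topic == null || topic.trim().isEmpty();
    }

    /** Fail fast with a clear message instead of a deep Kafka serialization error. */
    static void checkTopics(List<String> topics) {
        if (topics == null || topics.isEmpty()) {
            throw new IllegalArgumentException("At least one Kafka topic must be specified.");
        }
        for (String topic : topics) {
            if (isInvalid(topic)) {
                throw new IllegalArgumentException(
                        "Kafka topic names must not be null, empty or blank, but got: '" + topic + "'");
            }
        }
    }

    public static void main(String[] args) {
        checkTopics(Arrays.asList("orders", "payments")); // passes silently
        try {
            checkTopics(Arrays.asList("orders", "  "));
        } catch (IllegalArgumentException e) {
            // A much clearer message than the nested Kafka one quoted above
            System.out.println(e.getMessage());
        }
    }
}
```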





[jira] [Updated] (FLINK-18627) Get unmatch filter method records to side output

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-18627:

Fix Version/s: 1.16.0

> Get unmatch filter method records to side output
> 
>
> Key: FLINK-18627
> URL: https://issues.apache.org/jira/browse/FLINK-18627
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Roey Shem Tov
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
> Fix For: 1.15.0, 1.16.0
>
>
> Records that do not match a filter function's predicate should somehow be sent to a side output.
> Example:
>  
> {code:java}
> datastream
> .filter(i->i%2==0)
> .sideOutput(oddNumbersSideOutput);
> {code}
>  
>  
> That way we can filter multiple times and send the filtered-out records to a 
> side output instead of dropping them immediately; this can be useful in many ways.
>  
> What do you think?
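Flink does not expose a {{sideOutput}} call on {{filter}} today; the proposed behavior can already be approximated with a ProcessFunction and an OutputTag. A sketch under that assumption (assumes an existing {{DataStream<Integer> dataStream}}):

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

// Anonymous subclass so the element type information is preserved
final OutputTag<Integer> oddNumbersTag = new OutputTag<Integer>("odd-numbers") {};

SingleOutputStreamOperator<Integer> evens = dataStream
    .process(new ProcessFunction<Integer, Integer>() {
        @Override
        public void processElement(Integer value, Context ctx, Collector<Integer> out) {
            if (value % 2 == 0) {
                out.collect(value);               // matching records go downstream
            } else {
                ctx.output(oddNumbersTag, value); // "unmatched" records go to the side output
            }
        }
    });

DataStream<Integer> odds = evens.getSideOutput(oddNumbersTag);
```

The proposal would essentially be syntactic sugar over this pattern, applied directly to `filter`.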





[jira] [Updated] (FLINK-17405) add test cases for cancel job in SQL client

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-17405:

Fix Version/s: 1.16.0

> add test cases for cancel job in SQL client
> ---
>
> Key: FLINK-17405
> URL: https://issues.apache.org/jira/browse/FLINK-17405
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Reporter: godfrey he
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Fix For: 1.15.0, 1.16.0
>
>
> As discussed in 
> [FLINK-15669|https://issues.apache.org/jira/browse/FLINK-15669], we can 
> re-add some tests to verify the job-cancellation logic in the SQL client.





[jira] [Updated] (FLINK-22222) Global config is logged twice by the client for batch jobs

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-2?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-2:

Fix Version/s: 1.16.0

> Global config is logged twice by the client for batch jobs
> --
>
> Key: FLINK-2
> URL: https://issues.apache.org/jira/browse/FLINK-2
> Project: Flink
>  Issue Type: Sub-task
>  Components: Command Line Client
>Reporter: Chesnay Schepler
>Priority: Major
> Fix For: 1.15.0, 1.16.0
>
>
> Global config is logged twice by the client for batch jobs.





[jira] [Updated] (FLINK-24078) Remove brackets around variables

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-24078:

Fix Version/s: 1.16.0

> Remove brackets around variables
> 
>
> Key: FLINK-24078
> URL: https://issues.apache.org/jira/browse/FLINK-24078
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Metrics
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.15.0, 1.16.0
>
>
> Internally, variable keys are stored with brackets, e.g., {{}}.
> In practice all reporters filter these characters in one way or another, and 
> overall this is a subtle trap that we keep running into without it providing 
> any benefit.
> We should get rid of them.





[jira] [Updated] (FLINK-23533) Split InputSelection into public and internal parts

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-23533:

Fix Version/s: 1.16.0

> Split InputSelection into public and internal parts
> ---
>
> Key: FLINK-23533
> URL: https://issues.apache.org/jira/browse/FLINK-23533
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Task
>Reporter: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.15.0, 1.16.0
>
>
> The InputSelection is a {{PublicEvolving}} class but exposes methods that 
> should not be part of a public contract, such as e.g. 
> {{fairSelectNextIndexOutOf2}}.
> InputSelection should contain only the builder for selecting the 
> input/constructing a selected mask.





[jira] [Updated] (FLINK-24854) StateHandleSerializationTest unit test error

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-24854:

Fix Version/s: 1.16.0

> StateHandleSerializationTest unit test error
> 
>
> Key: FLINK-24854
> URL: https://issues.apache.org/jira/browse/FLINK-24854
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.14.0
>Reporter: zlzhang0122
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0, 1.16.0
>
>
> StateHandleSerializationTest.ensureStateHandlesHaveSerialVersionUID() will 
> fail because RocksDBStateDownloaderTest contains an anonymous subclass of 
> StreamStateHandle, which is a subtype of StateObject. Since the class is 
> anonymous, the assertFalse fails and the unit test fails with it.
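One plausible fix is for the audit to skip anonymous (and local) classes, which cannot carry a stable serialVersionUID anyway. A self-contained sketch of the relevant reflection check (the nested interfaces below are stand-ins, not Flink's real types):

```java
public class AnonymousHandleDemo {

    /** Stand-ins for Flink's StateObject / StreamStateHandle hierarchy. */
    interface StateObject extends java.io.Serializable {}
    interface StreamStateHandle extends StateObject {}

    /** A serialization audit should only require serialVersionUID on named classes. */
    static boolean needsSerialVersionUid(Class<?> clazz) {
        return !clazz.isAnonymousClass() && !clazz.isLocalClass();
    }

    public static void main(String[] args) {
        StreamStateHandle anonymous = new StreamStateHandle() {};
        System.out.println(anonymous.getClass().isAnonymousClass());      // true
        System.out.println(needsSerialVersionUid(anonymous.getClass())); // false
    }
}
```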





[jira] [Updated] (FLINK-14028) Support to configure the log level of the Python user-defined functions

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-14028:

Fix Version/s: 1.16.0

> Support to configure the log level of the Python user-defined functions
> ---
>
> Key: FLINK-14028
> URL: https://issues.apache.org/jira/browse/FLINK-14028
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Dian Fu
>Priority: Major
> Fix For: 1.15.0, 1.16.0
>
>
> Beam's portability framework provides the ability to collect logs from the 
> Python workers to the operator. Currently the log level INFO is set by 
> default. It should be possible to configure the log level of the Python 
> user-defined functions.





[jira] [Updated] (FLINK-21769) Encapsulate component meta data

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-21769:

Fix Version/s: 1.16.0

> Encapsulate component meta data
> ---
>
> Key: FLINK-21769
> URL: https://issues.apache.org/jira/browse/FLINK-21769
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Metrics
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: auto-unassigned, stale-assigned
> Fix For: 1.15.0, 1.16.0
>
>
> Encapsulate jm/tm/job/task/operator meta data in simple pojos that can be 
> passed around, reducing the dependency on strict metric group hierarchies.





[jira] [Updated] (FLINK-23942) Python documentation redirects to shared pages should activate python tabs

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-23942:

Fix Version/s: 1.16.0

> Python documentation redirects to shared pages should activate python tabs
> --
>
> Key: FLINK-23942
> URL: https://issues.apache.org/jira/browse/FLINK-23942
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Documentation
>Reporter: Chesnay Schepler
>Priority: Minor
> Fix For: 1.15.0, 1.16.0
>
>
> The Python Documentation contains a few items that should just link to the 
> plain DataStream documentation which contain tabs.
> Putting aside that the experience of switching between these places is quite 
> a jarring one, we should make sure that users following such a redirection 
> immediately see the Python version of code samples.





[jira] [Updated] (FLINK-24721) Update the based on the Maven official indication

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-24721:

Fix Version/s: 1.16.0

> Update the  based on the Maven official indication
> 
>
> Key: FLINK-24721
> URL: https://issues.apache.org/jira/browse/FLINK-24721
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Build System
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Minor
>  Labels: stale-assigned, starter
> Fix For: 1.15.0, 1.16.0
>
>
> Example:
>  org.apache.flink
>  flink-formats
>  1.15-SNAPSHOT
>  *..*
>  
> IntelliJ IDEA will point out the issue.
> There are two solutions, following the official indication: 
> [http://maven.apache.org/ref/3.3.9/maven-model/maven.html#class_parent].
> Option 1: 
>  *../pom.xml*
> Option 2: 
>  since we use the default value of relativePath, the whole line can be 
> omitted.
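Reconstructing the tags that were stripped from the example above (a sketch based on the Maven parent model; the groupId/artifactId/version come from the example itself):

```xml
<!-- Option 1: spell out the relative path explicitly -->
<parent>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-formats</artifactId>
  <version>1.15-SNAPSHOT</version>
  <relativePath>../pom.xml</relativePath>
</parent>

<!-- Option 2: omit <relativePath> and rely on its default value (../pom.xml) -->
<parent>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-formats</artifactId>
  <version>1.15-SNAPSHOT</version>
</parent>
```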





[jira] [Updated] (FLINK-22344) KafkaTableITCase.testKafkaSourceSinkWithKeyAndFullValue times out

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-22344:

Fix Version/s: 1.16.0

> KafkaTableITCase.testKafkaSourceSinkWithKeyAndFullValue times out
> -
>
> Key: FLINK-22344
> URL: https://issues.apache.org/jira/browse/FLINK-22344
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.13.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
> Fix For: 1.15.0, 1.16.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16731&view=logs&j=72d4811f-9f0d-5fd0-014a-0bc26b72b642&t=c1d93a6a-ba91-515d-3196-2ee8019fbda7&l=6619
> {code}
> Apr 18 21:49:28 [ERROR] testKafkaSourceSinkWithKeyAndFullValue[legacy = 
> false, format = 
> csv](org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase)  
> Time elapsed: 30.121 s  <<< ERROR!
> Apr 18 21:49:28 org.junit.runners.model.TestTimedOutException: test timed out 
> after 30 seconds
> Apr 18 21:49:28   at java.lang.Thread.sleep(Native Method)
> Apr 18 21:49:28   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:237)
> Apr 18 21:49:28   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:113)
> Apr 18 21:49:28   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> Apr 18 21:49:28   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> Apr 18 21:49:28   at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:351)
> Apr 18 21:49:28   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestUtils.collectRows(KafkaTableTestUtils.java:52)
> Apr 18 21:49:28   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.testKafkaSourceSinkWithKeyAndFullValue(KafkaTableITCase.java:538)
> Apr 18 21:49:28   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Apr 18 21:49:28   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Apr 18 21:49:28   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Apr 18 21:49:28   at java.lang.reflect.Method.invoke(Method.java:498)
> Apr 18 21:49:28   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Apr 18 21:49:28   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Apr 18 21:49:28   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Apr 18 21:49:28   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Apr 18 21:49:28   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Apr 18 21:49:28   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Apr 18 21:49:28   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> Apr 18 21:49:28   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> Apr 18 21:49:28   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Apr 18 21:49:28   at java.lang.Thread.run(Thread.java:748)
> {code}





[jira] [Updated] (FLINK-22520) KafkaSourceLegacyITCase.testMultipleSourcesOnePartition hangs

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-22520:

Fix Version/s: 1.16.0

> KafkaSourceLegacyITCase.testMultipleSourcesOnePartition hangs
> -
>
> Key: FLINK-22520
> URL: https://issues.apache.org/jira/browse/FLINK-22520
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
>Reporter: Guowei Ma
>Assignee: Fabian Paul
>Priority: Major
>  Labels: stale-assigned, test-stability
> Fix For: 1.15.0, 1.16.0
>
>
> There are no error messages.
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17363&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5&l=42023
> {code:java}
> "main" #1 prio=5 os_prio=0 tid=0x7f4d3400b000 nid=0x203f waiting on 
> condition [0x7f4d3be2e000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xa68f3b68> (a 
> java.util.concurrent.CompletableFuture$Signaller)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
>   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
>   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
>   at org.apache.flink.test.util.TestUtils.tryExecute(TestUtils.java:49)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runMultipleSourcesOnePartitionExactlyOnceTest(KafkaConsumerTestBase.java:1112)
>   at 
> org.apache.flink.connector.kafka.source.KafkaSourceLegacyITCase.testMultipleSourcesOnePartition(KafkaSourceLegacyITCase.java:87)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> {code}





[jira] [Updated] (FLINK-15130) Drop "RequiredParameters" and "Options"

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-15130:

Fix Version/s: 1.16.0

> Drop "RequiredParameters" and "Options"
> ---
>
> Key: FLINK-15130
> URL: https://issues.apache.org/jira/browse/FLINK-15130
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / DataSet, API / DataStream
>Affects Versions: 1.9.1
>Reporter: Stephan Ewen
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.15.0, 1.16.0
>
>
> As per the mailing list discussion, we want to drop those because they are 
> unused, redundant code.
> There are many options for command-line parsing, including one in Flink 
> (ParameterTool).





[jira] [Updated] (FLINK-16023) jdbc connector's 'connector.table' property should be optional rather than required

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-16023:

Fix Version/s: 1.16.0

> jdbc connector's 'connector.table' property should be optional rather than 
> required
> ---
>
> Key: FLINK-16023
> URL: https://issues.apache.org/jira/browse/FLINK-16023
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.15.0, 1.16.0
>
>
> jdbc connector's 'connector.table' property should be optional rather than 
> required.
> The connector should assume the table name in the DBMS is the same as that in 
> Flink when this property is not present.
> The fundamental reason is that such a design didn't consider integration with 
> catalogs. Once a catalog is introduced, the Flink table's name should be just 
> the 'table''s name in the corresponding external system. 
> cc [~ykt836]





[jira] [Updated] (FLINK-22834) KafkaITCase.testBigRecordJob times out on azure

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-22834:

Fix Version/s: 1.16.0

>  KafkaITCase.testBigRecordJob times out on azure
> 
>
> Key: FLINK-22834
> URL: https://issues.apache.org/jira/browse/FLINK-22834
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
>Reporter: Dawid Wysakowicz
>Assignee: Fabian Paul
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-assigned, test-stability
> Fix For: 1.15.0, 1.16.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18481&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5&l=6992
> {code}
> java.lang.InterruptedException
>   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:347)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
>   at org.apache.flink.test.util.TestUtils.tryExecute(TestUtils.java:49)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runBigRecordTestTopology(KafkaConsumerTestBase.java:1471)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase.testBigRecordJob(KafkaITCase.java:119)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> Jun 01 07:06:39 [ERROR] Tests run: 23, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 214.086 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase
> Jun 01 07:06:39 [ERROR] 
> testBigRecordJob(org.apache.flink.streaming.connectors.kafka.KafkaITCase)  
> Time elapsed: 60.015 s  <<< ERROR!
> Jun 01 07:06:39 org.junit.runners.model.TestTimedOutException: test timed out 
> after 6 milliseconds
> Jun 01 07:06:39   at sun.misc.Unsafe.park(Native Method)
> Jun 01 07:06:39   at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> Jun 01 07:06:39   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
> Jun 01 07:06:39   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> Jun 01 07:06:39   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
> Jun 01 07:06:39   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> Jun 01 07:06:39   at 
> org.apache.flink.test.util.TestUtils.tryExecute(TestUtils.java:49)
> Jun 01 07:06:39   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runBigRecordTestTopology(KafkaConsumerTestBase.java:1471)
> Jun 01 07:06:39   at 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase.testBigRecordJob(KafkaITCase.java:119)
> Jun 01 07:06:39   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jun 01 07:06:39   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jun 01 07:06:39   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jun 01 07:06:39   at java.lang.reflect.Method.invoke(Method.java:498)
> Jun 01 07:06:39   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Jun 01 07:06:39   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jun 01 07:06:39   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Jun 01 07:06:39   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jun 01 07:06:39   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Jun 01 07:06:39   at 
> org.junit.internal.runners.statements.FailOnTimeo

[jira] [Updated] (FLINK-22742) Lookup join condition with process time throws org.codehaus.commons.compiler.CompileException

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-22742:

Fix Version/s: 1.16.0

> Lookup join condition with process time throws 
> org.codehaus.commons.compiler.CompileException
> -
>
> Key: FLINK-22742
> URL: https://issues.apache.org/jira/browse/FLINK-22742
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.12.0, 1.13.0, 1.14.0
>Reporter: Caizhi Weng
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.15.0, 1.16.0
>
>
> Add the following test case to 
> {{org.apache.flink.table.api.TableEnvironmentITCase}} to reproduce this bug.
> {code:scala}
> @Test
> def myTest(): Unit = {
>   val id1 = TestValuesTableFactory.registerData(
> Seq(Row.of("abc", LocalDateTime.of(2000, 1, 1, 0, 0
>   val ddl1 =
> s"""
>|CREATE TABLE Ta (
>|  id VARCHAR,
>|  ts TIMESTAMP,
>|  proc AS PROCTIME()
>|) WITH (
>|  'connector' = 'values',
>|  'data-id' = '$id1',
>|  'bounded' = 'true'
>|)
>|""".stripMargin
>   tEnv.executeSql(ddl1)
>   val id2 = TestValuesTableFactory.registerData(
> Seq(Row.of("abc", LocalDateTime.of(2000, 1, 2, 0, 0
>   val ddl2 =
> s"""
>|CREATE TABLE Tb (
>|  id VARCHAR,
>|  ts TIMESTAMP
>|) WITH (
>|  'connector' = 'values',
>|  'data-id' = '$id2',
>|  'bounded' = 'true'
>|)
>|""".stripMargin
>   tEnv.executeSql(ddl2)
>   val it = tEnv.executeSql(
> """
>   |SELECT * FROM Ta AS t1
>   |INNER JOIN Tb FOR SYSTEM_TIME AS OF t1.proc AS t2
>   |ON t1.id = t2.id
>   |WHERE CAST(coalesce(t1.ts, t2.ts) AS VARCHAR) >= 
> CONCAT(DATE_FORMAT(t1.proc, '-MM-dd'), ' 00:00:00')
>   |""".stripMargin).collect()
>   while (it.hasNext) {
> System.out.println(it.next())
>   }
> }
> {code}
> The exception stack is
> {code}
> /* 1 */
> /* 2 */  public class JoinTableFuncCollector$25 extends 
> org.apache.flink.table.runtime.collector.TableFunctionCollector {
> /* 3 */
> /* 4 */org.apache.flink.table.data.GenericRowData out = new 
> org.apache.flink.table.data.GenericRowData(2);
> /* 5 */org.apache.flink.table.data.utils.JoinedRowData joinedRow$9 = new 
> org.apache.flink.table.data.utils.JoinedRowData();
> /* 6 */
> /* 7 */private final org.apache.flink.table.data.binary.BinaryStringData 
> str$17 = 
> org.apache.flink.table.data.binary.BinaryStringData.fromString("-MM-dd");
> /* 8 */   
> /* 9 */private static final java.util.TimeZone timeZone =
> /* 10 */ java.util.TimeZone.getTimeZone("Asia/Shanghai");
> /* 11 */
> /* 12 */private final org.apache.flink.table.data.binary.BinaryStringData 
> str$20 = org.apache.flink.table.data.binary.BinaryStringData.fromString(" 
> 00:00:00");
> /* 13 */   
> /* 14 */
> /* 15 */public JoinTableFuncCollector$25(Object[] references) throws 
> Exception {
> /* 16 */  
> /* 17 */}
> /* 18 */
> /* 19 */@Override
> /* 20 */public void open(org.apache.flink.configuration.Configuration 
> parameters) throws Exception {
> /* 21 */  
> /* 22 */}
> /* 23 */
> /* 24 */@Override
> /* 25 */public void collect(Object record) throws Exception {
> /* 26 */  org.apache.flink.table.data.RowData in1 = 
> (org.apache.flink.table.data.RowData) getInput();
> /* 27 */  org.apache.flink.table.data.RowData in2 = 
> (org.apache.flink.table.data.RowData) record;
> /* 28 */  
> /* 29 */  org.apache.flink.table.data.binary.BinaryStringData field$7;
> /* 30 */boolean isNull$7;
> /* 31 */org.apache.flink.table.data.TimestampData field$8;
> /* 32 */boolean isNull$8;
> /* 33 */org.apache.flink.table.data.TimestampData field$10;
> /* 34 */boolean isNull$10;
> /* 35 */boolean isNull$13;
> /* 36 */org.apache.flink.table.data.binary.BinaryStringData result$14;
> /* 37 */org.apache.flink.table.data.TimestampData field$15;
> /* 38 */boolean isNull$15;
> /* 39 */org.apache.flink.table.data.TimestampData result$16;
> /* 40 */boolean isNull$18;
> /* 41 */org.apache.flink.table.data.binary.BinaryStringData result$19;
> /* 42 */boolean isNull$21;
> /* 43 */org.apache.flink.table.data.binary.BinaryStringData result$22;
> /* 44 */boolean isNull$23;
> /* 45 */boolean result$24;
> /* 46 */  isNull$15 = in1.isNullAt(2);
> /* 47 */field$15 = null;
> /* 48 */if (!isNull$15) {
> /* 49 */  field$15 = in1.getTimestamp(2, 3);
> /* 50 */}
> /* 51 */isNull$8 = in2.isNullAt(1);
> /* 52 */field$8 = null;
> /* 53 */if (!isNull$8) {
> /* 54 */  field$8 = in2.getTimestamp(1, 6);
> /* 55 */}
> /* 56 */isNu

[jira] [Updated] (FLINK-19263) Enforce alphabetical order in configuration option docs

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-19263:

Fix Version/s: 1.16.0

> Enforce alphabetical order in configuration option docs
> ---
>
> Key: FLINK-19263
> URL: https://issues.apache.org/jira/browse/FLINK-19263
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Runtime / Configuration
>Reporter: Chesnay Schepler
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.15.0, 1.16.0
>
>
> The ConfigDocsGenerator sorts options alphabetically, however there are no 
> checks to ensure that the generated files adhere to that order.
> This is a problem because time and time again these files are manually 
> modified, breaking the order, causing other PRs that then use the generator 
> to include unrelated changes.
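The missing check could be a few lines run by CI over the option keys extracted from the generated files. A minimal sketch (hypothetical method names, not the actual ConfigDocsGenerator code):

```java
import java.util.Arrays;
import java.util.List;

public class SortedKeysCheck {

    /**
     * Returns the index of the first key that breaks alphabetical order,
     * or -1 if the list is sorted.
     */
    static int firstUnsortedIndex(List<String> keys) {
        for (int i = 1; i < keys.size(); i++) {
            if (keys.get(i - 1).compareTo(keys.get(i)) > 0) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // Sorted list: no violation
        System.out.println(firstUnsortedIndex(Arrays.asList("akka.ask.timeout", "blob.server.port"))); // -1
        // Manually edited file that broke the order: violation at index 1
        System.out.println(firstUnsortedIndex(Arrays.asList("blob.server.port", "akka.ask.timeout"))); // 1
    }
}
```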





[jira] [Updated] (FLINK-23350) Write doc for change log disorder by special operators

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-23350:

Fix Version/s: 1.16.0

> Write doc for change log disorder by special operators
> --
>
> Key: FLINK-23350
> URL: https://issues.apache.org/jira/browse/FLINK-23350
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: 1.15.0, 1.16.0
>
>
> (Copied from FLINK-20374) We can start documenting the changelog behavior of 
> the planner in our docs as part of this ticket. Users should get more 
> insight into what the planner is doing. It is too much of a black box right 
> now. It is very difficult to interpret why and when +U/-U records or special 
> operators are inserted into the pipeline.





[jira] [Updated] (FLINK-22209) Make StreamTask#triggerCheckpointAsync return void

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-22209:

Fix Version/s: 1.16.0

> Make StreamTask#triggerCheckpointAsync return void
> --
>
> Key: FLINK-22209
> URL: https://issues.apache.org/jira/browse/FLINK-22209
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Task
>Reporter: Dawid Wysakowicz
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.15.0, 1.16.0
>
>
> The return value of {{StreamTask#triggerCheckpointAsync}} is ignored in 
> production code. We should remove it to decrease the chances for confusion.
> We must make sure we do not lose test coverage with it. A lot of tests depend 
> on that return value.





[jira] [Updated] (FLINK-25291) Add failure cases in DataStream source and sink suite of connector testing framework

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-25291:

Fix Version/s: 1.16.0

> Add failure cases in DataStream source and sink suite of connector testing 
> framework
> 
>
> Key: FLINK-25291
> URL: https://issues.apache.org/jira/browse/FLINK-25291
> Project: Flink
>  Issue Type: Improvement
>  Components: Test Infrastructure
>Reporter: Qingsheng Ren
>Priority: Major
> Fix For: 1.15.0, 1.16.0
>
>






[jira] [Updated] (FLINK-16799) add hive partition limit when read from hive

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-16799:

Fix Version/s: 1.16.0

> add hive partition limit when read from hive
> 
>
> Key: FLINK-16799
> URL: https://issues.apache.org/jira/browse/FLINK-16799
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
>Reporter: Jun Zhang
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Fix For: 1.15.0, 1.16.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Add a partition limit when reading from Hive: a query will not be executed if 
> it attempts to fetch more partitions per table than the configured limit. This 
> avoids accidental full table scans.
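A minimal sketch of such a guard, assuming the limit arrives as a plain int from configuration (option name and wiring deliberately left out):

```java
public class PartitionLimitGuard {

    /** Throws if the query would fetch more partitions than the limit; a negative limit disables the check. */
    static void checkPartitionLimit(int fetchedPartitions, int limit) {
        if (limit >= 0 && fetchedPartitions > limit) {
            throw new IllegalStateException(
                    "Query attempts to fetch " + fetchedPartitions
                            + " partitions, exceeding the configured limit of " + limit
                            + ". Narrow the partition filter or raise the limit.");
        }
    }

    public static void main(String[] args) {
        checkPartitionLimit(10, 100); // within the limit: passes
        checkPartitionLimit(10, -1);  // limit disabled: passes
        try {
            checkPartitionLimit(1000, 100); // would scan too many partitions
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```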





[jira] [Updated] (FLINK-25587) HiveCatalogITCase crashed on Azure with exit code 239

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-25587:

Fix Version/s: 1.16.0

> HiveCatalogITCase crashed on Azure with exit code 239
> -
>
> Key: FLINK-25587
> URL: https://issues.apache.org/jira/browse/FLINK-25587
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Connectors / Hive
>Reporter: Yun Gao
>Priority: Major
>  Labels: test-stability
> Fix For: 1.15.0, 1.16.0
>
>
> {code:java}
> Jan 10 05:56:47 [ERROR] Please refer to 
> /__w/1/s/flink-connectors/flink-connector-hive/target/surefire-reports for 
> the individual test results.
> Jan 10 05:56:47 [ERROR] Please refer to dump files (if any exist) 
> [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
> Jan 10 05:56:47 [ERROR] ExecutionException The forked VM terminated without 
> properly saying goodbye. VM crash or System.exit called?
> Jan 10 05:56:47 [ERROR] Command was /bin/sh -c cd 
> /__w/1/s/flink-connectors/flink-connector-hive/target && 
> /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=2 -XX:+UseG1GC -jar 
> /__w/1/s/flink-connectors/flink-connector-hive/target/surefire/surefirebooter4268791554037437993.jar
>  /__w/1/s/flink-connectors/flink-connector-hive/target/surefire 
> 2022-01-10T05-33-55_476-jvmRun2 surefire485701932125419585tmp 
> surefire_501275439895774136096tmp
> Jan 10 05:56:47 [ERROR] Error occurred in starting fork, check output in log
> Jan 10 05:56:47 [ERROR] Process Exit Code: 239
> Jan 10 05:56:47 [ERROR] Crashed tests:
> Jan 10 05:56:47 [ERROR] org.apache.flink.table.catalog.hive.HiveCatalogITCase
> Jan 10 05:56:47 [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: 
> ExecutionException The forked VM terminated without properly saying goodbye. 
> VM crash or System.exit called?
> Jan 10 05:56:47 [ERROR] Command was /bin/sh -c cd 
> /__w/1/s/flink-connectors/flink-connector-hive/target && 
> /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=2 -XX:+UseG1GC -jar 
> /__w/1/s/flink-connectors/flink-connector-hive/target/surefire/surefirebooter4268791554037437993.jar
>  /__w/1/s/flink-connectors/flink-connector-hive/target/surefire 
> 2022-01-10T05-33-55_476-jvmRun2 surefire485701932125419585tmp 
> surefire_501275439895774136096tmp
> Jan 10 05:56:47 [ERROR] Error occurred in starting fork, check output in log
> Jan 10 05:56:47 [ERROR] Process Exit Code: 239
> Jan 10 05:56:47 [ERROR] Crashed tests:
> Jan 10 05:56:47 [ERROR] org.apache.flink.table.catalog.hive.HiveCatalogITCase
> Jan 10 05:56:47 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:532)
> Jan 10 05:56:47 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:479)
> Jan 10 05:56:47 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:322)
> Jan 10 05:56:47 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:266)
> Jan 10 05:56:47 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1314)
> Jan 10 05:56:47 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1159)
> Jan 10 05:56:47 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:932)
> Jan 10 05:56:47 [ERROR] at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
> Jan 10 05:56:47 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> Jan 10 05:56:47 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> Jan 10 05:56:47 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> Jan 10 05:56:47 [ERROR] at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
> Jan 10 05:56:47 [ERROR] at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
> Jan 10 05:56:47 [ERROR] at 
> org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
> Jan 10 05:56:47 [ERROR] at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:120)
> Jan 10 05:56:47 [ERROR] at 
> org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:355)
> Jan 10 05:56:47 [ERROR] at 
> org.apache.maven.DefaultMaven.execute(DefaultMaven.java:155)
> Jan 10 05:56:47 [ERROR] at 
> org.apache.maven.cli.MavenCli.exec

[jira] [Updated] (FLINK-26369) Translate the part zh-page mixed with not be translated.

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-26369:

Fix Version/s: 1.16.0

> Translate the part zh-page mixed with not be translated. 
> -
>
> Key: FLINK-26369
> URL: https://issues.apache.org/jira/browse/FLINK-26369
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.15.0
>Reporter: Aiden Gong
>Priority: Minor
> Fix For: 1.15.0, 1.16.0
>
>
> Because 
> [FLINK-26296|https://issues.apache.org/jira/projects/FLINK/issues/FLINK-26296]
>  added documentation, these files should be translated.
> Files:
> docs/content.zh/docs/deployment/ha/overview.md
> docs/content.zh/docs/internals/job_scheduling.md





[jira] [Updated] (FLINK-24590) Consider removing timeout from FlinkMatchers#futureWillCompleteExceptionally

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-24590:

Fix Version/s: 1.16.0

> Consider removing timeout from FlinkMatchers#futureWillCompleteExceptionally
> 
>
> Key: FLINK-24590
> URL: https://issues.apache.org/jira/browse/FLINK-24590
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Tests
>Reporter: Chesnay Schepler
>Priority: Major
> Fix For: 1.15.0, 1.16.0
>
>
> We concluded not to use timeouts in tests, but certain utility methods still 
> require a timeout argument.





[jira] [Updated] (FLINK-23694) KafkaWriterITCase hangs on azure

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-23694:

Fix Version/s: 1.16.0

> KafkaWriterITCase hangs on azure
> 
>
> Key: FLINK-23694
> URL: https://issues.apache.org/jira/browse/FLINK-23694
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
> Fix For: 1.15.0, 1.16.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21761&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=15a22db7-8faa-5b34-3920-d33c9f0ca23c&l=7220





[jira] [Updated] (FLINK-14690) Support 'DESCRIBE CATALOG catalogName' statement in TableEnvironment and SQL Client

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-14690:

Fix Version/s: 1.16.0

> Support 'DESCRIBE CATALOG catalogName' statement in TableEnvironment and SQL 
> Client
> ---
>
> Key: FLINK-14690
> URL: https://issues.apache.org/jira/browse/FLINK-14690
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API, Table SQL / Client
>Reporter: Terry Wang
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.15.0, 1.16.0
>
>
> 1. showCatalogsStatement. (has been supported)
> SHOW CATALOGS
> 2. describeCatalogStatement
> DESCRIBE CATALOG catalogName
> See 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP+69+-+Flink+SQL+DDL+Enhancement
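The two statements side by side, as proposed in the FLIP-69 direction (syntax illustrative, not a released implementation):

```sql
-- already supported: list all registered catalogs
SHOW CATALOGS;

-- proposed: print metadata (type, default database, properties) of one catalog
DESCRIBE CATALOG catalogName;
```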





[jira] [Updated] (FLINK-17372) SlotManager should expose total required resources

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-17372:

Fix Version/s: 1.16.0

> SlotManager should expose total required resources
> --
>
> Key: FLINK-17372
> URL: https://issues.apache.org/jira/browse/FLINK-17372
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: Till Rohrmann
>Priority: Major
> Fix For: 1.15.0, 1.16.0
>
>
> Currently, the {{SlotManager}} exposes the set of required resources which 
> have not been fulfilled via {{SlotManager.getRequiredResources}}. The idea of 
> this function is to allow the {{ResourceManager}} to decide whether new 
> pods/containers need to be started or not.
> The problem is that once a resource has been registered at the 
> {{SlotManager}} it will decrease the set of required resources. If now a 
> pod/container fails, then the {{ResourceManager}} won't know whether it needs 
> to restart the container or not.
> In order to simplify the interaction, I propose to let the {{SlotManager}} 
> announce all of its required resources (pending + registered resources). That 
> way the {{ResourceManager}} only needs to compare the set of required 
> resources with the set of pending and allocated containers/pods.
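With the total requirement announced, the ResourceManager's decision reduces to a set difference. A sketch with deliberately simplified counts instead of real resource profiles:

```java
public class ResourceReconciliation {

    /**
     * Number of new containers/pods to request so that pending + allocated
     * covers everything the SlotManager requires (pending + registered).
     */
    static int containersToRequest(int totalRequired, int pending, int allocated) {
        return Math.max(0, totalRequired - pending - allocated);
    }

    public static void main(String[] args) {
        // 10 required, 2 pending, 6 allocated -> request 2 more.
        System.out.println(containersToRequest(10, 2, 6));
        // A pod fails: allocated drops to 5, required stays 10 -> request 3,
        // without the SlotManager having to re-announce anything.
        System.out.println(containersToRequest(10, 2, 5));
    }
}
```

Because the announced requirement does not shrink when a resource registers, a container failure is handled by the same comparison instead of a special code path.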





[jira] [Updated] (FLINK-18578) Add rejecting checkpoint logic in source

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-18578:

Fix Version/s: 1.16.0

> Add rejecting checkpoint logic in source
> 
>
> Key: FLINK-18578
> URL: https://issues.apache.org/jira/browse/FLINK-18578
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common, Runtime / Checkpointing
>Reporter: Qingsheng Ren
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
> Fix For: 1.15.0, 1.16.0
>
>
> In some database change data capture (CDC) cases, the process is usually 
> divided into two phases in order to build a complete view of the captured 
> tables: a snapshotting phase (lock the captured tables and scan all records in 
> them) and a log streaming phase (read all changes starting from the moment of 
> locking the tables). The snapshotting phase has to be atomic: if any failure 
> happens, all records produced during it must be discarded, because the 
> contents of the captured tables might have changed during recovery. 
> Checkpointing within the snapshotting phase is therefore meaningless as well.
> As a result, the source needs a new feature to reject a checkpoint while it is 
> inside an atomic operation or some other process that cannot take a checkpoint 
> at that moment. Such a rejection should not be treated as a failure that could 
> fail the entire job.
>  
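A rough sketch of the intended contract, with hypothetical names (the real interface would live on the source and checkpoint-coordinator side):

```java
public class CheckpointRejectionSketch {

    enum CheckpointDecision { ACCEPT, REJECT }

    /** Simplified source state: checkpoints are rejected during the atomic snapshot phase. */
    static class CdcSource {
        private volatile boolean inSnapshotPhase;

        void startSnapshotPhase() { inSnapshotPhase = true; }

        void finishSnapshotPhase() { inSnapshotPhase = false; }

        /** Rejection is a soft signal: the coordinator should retry later, not fail the job. */
        CheckpointDecision onCheckpointTrigger() {
            return inSnapshotPhase ? CheckpointDecision.REJECT : CheckpointDecision.ACCEPT;
        }
    }

    public static void main(String[] args) {
        CdcSource source = new CdcSource();
        source.startSnapshotPhase();
        System.out.println(source.onCheckpointTrigger()); // REJECT
        source.finishSnapshotPhase();
        System.out.println(source.onCheckpointTrigger()); // ACCEPT
    }
}
```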





[jira] [Updated] (FLINK-20787) Improve the Table API to solve user friction

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-20787:

Fix Version/s: 1.16.0

> Improve the Table API to solve user friction
> 
>
> Key: FLINK-20787
> URL: https://issues.apache.org/jira/browse/FLINK-20787
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Dian Fu
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.15.0, 1.16.0
>
>
> Currently, there are bugs or missing features in the Table API which cause 
> friction for users. This is an umbrella JIRA for all such issues specific to 
> the Table API, aiming to make the Table API smooth to use.





[jira] [Updated] (FLINK-20474) flink cep results of doc is not right

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-20474:

Fix Version/s: 1.16.0

> flink cep results of doc is not right
> -
>
> Key: FLINK-20474
> URL: https://issues.apache.org/jira/browse/FLINK-20474
> Project: Flink
>  Issue Type: Bug
>  Components: Library / CEP
>Affects Versions: 1.12.0
>Reporter: jackylau
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
> Fix For: 1.15.0, 1.16.0
>
>
> h4. Contiguity within looping patterns
> You can apply the same contiguity condition as discussed in the previous 
> [section|https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/libs/cep.html#combining-patterns]
>  within a looping pattern. The contiguity will be applied between elements 
> accepted into such a pattern. To illustrate the above with an example, a 
> pattern sequence {{"a b+ c"}} ({{"a"}} followed by any (non-deterministic 
> relaxed) sequence of one or more {{"b"}}’s followed by a {{"c"}}) with input 
> {{"a", "b1", "d1", "b2", "d2", "b3" "c"}} will have the following results:
>  # *Strict Contiguity*: {{{a b3 c}}} – the {{"d1"}} after {{"b1"}} causes 
> {{"b1"}} to be discarded, the same happens for {{"b2"}} because of {{"d2"}}.
>  # *Relaxed Contiguity*: {{{a b1 c}}}, {{{a b1 b2 c}}}, {{{a b1 b2 b3 c}}}, 
> {{{a b2 c}}}, {{{a b2 b3 c}}}, {{{a b3 c}}} - {{"d"}}’s are ignored.
>  # *Non-Deterministic Relaxed Contiguity*: {{{a b1 c}}}, {{{a b1 b2 c}}}, 
> {{{a b1 b3 c}}}, {{{a b1 b2 b3 c}}}, {{{a b2 c}}}, {{{a b2 b3 c}}}, {{{a b3 
> c}}} - notice the {{{a b1 b3 c}}}, which is the result of relaxing contiguity 
> between {{"b"}}’s.
>  
>  The description of {{"a b+ c"}} ({{"a"}} followed by any (non-deterministic 
> relaxed) sequence of one or more {{"b"}}'s followed by a {{"c"}}) is not 
> correct in saying *followed by a {{"c"}}*: the {{"c"}} is matched with *next* 
> (strict contiguity), not *followed by*.
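The relaxed and non-deterministic relaxed results above can be checked by enumeration: relaxed contiguity keeps every contiguous run of {{"b"}}'s, non-deterministic relaxed keeps every non-empty subsequence (strict contiguity leaves only the run adjacent to the final event, here {{b3}}). A self-contained sketch, no Flink CEP involved:

```java
import java.util.ArrayList;
import java.util.List;

public class ContiguityEnumeration {

    /** Relaxed contiguity: every contiguous run of the looping elements. */
    static List<List<String>> contiguousRuns(List<String> bs) {
        List<List<String>> result = new ArrayList<>();
        for (int i = 0; i < bs.size(); i++) {
            for (int j = i + 1; j <= bs.size(); j++) {
                result.add(bs.subList(i, j));
            }
        }
        return result;
    }

    /** Non-deterministic relaxed contiguity: every non-empty subsequence. */
    static List<List<String>> subsequences(List<String> bs) {
        List<List<String>> result = new ArrayList<>();
        for (int mask = 1; mask < (1 << bs.size()); mask++) {
            List<String> pick = new ArrayList<>();
            for (int i = 0; i < bs.size(); i++) {
                if ((mask & (1 << i)) != 0) {
                    pick.add(bs.get(i));
                }
            }
            result.add(pick);
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> bs = List.of("b1", "b2", "b3");
        System.out.println(contiguousRuns(bs)); // 6 matches for the b+ part
        System.out.println(subsequences(bs));   // 7 matches, including [b1, b3]
    }
}
```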





[jira] [Updated] (FLINK-15378) StreamFileSystemSink supported mutil hdfs plugins.

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-15378:

Fix Version/s: 1.16.0

> StreamFileSystemSink supported mutil hdfs plugins.
> --
>
> Key: FLINK-15378
> URL: https://issues.apache.org/jira/browse/FLINK-15378
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem, FileSystems
>Affects Versions: 1.9.2, 1.10.0
>Reporter: ouyangwulin
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
> Fix For: 1.15.0, 1.16.0
>
> Attachments: jobmananger.log
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [As reported on the mailing 
> list|https://lists.apache.org/thread.html/7a6b1e341bde0ef632a82f8d46c9c93da358244b6bac0d8d544d11cb%40%3Cuser.flink.apache.org%3E]
> Request 1: FileSystem plugins should not affect the default YARN dependencies.
> Request 2: StreamFileSystemSink should support multiple HDFS plugins under the 
> same schema.
> Problem description:
>     When a filesystem plugin is put into FLINK_HOME/plugins, with a class 
> '*com.filesystem.plugin.FileSystemFactoryEnhance*' implementing 
> '*FileSystemFactory*', the JobManager on startup calls 
> FileSystem.initialize(configuration, 
> PluginUtils.createPluginManagerFromRootFolder(configuration)) to load the 
> factories into the map FileSystem#FS_FACTORIES, which is keyed only by schema. 
> The TM/JM uses the local Hadoop conf A, while the user code in the filesystem 
> plugin uses Hadoop conf B; conf A and conf B point to different Hadoop 
> clusters. The JM then fails to start, because the BlobServer in the JM loads 
> conf B to obtain the filesystem. The full log is attached.
>  
> Proposed resolution:
>     Use the schema plus a spec identifier as the key for FileSystem#FS_FACTORIES.
>  
>  
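The proposed resolution, sketched with plain maps; FactoryKey and the spec identifier are illustrative names, not actual Flink types:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class FactoryKeySketch {

    /** Composite key: scheme plus an extra identifier distinguishing plugins for the same scheme. */
    static final class FactoryKey {
        final String scheme;
        final String specId;

        FactoryKey(String scheme, String specId) {
            this.scheme = scheme;
            this.specId = specId;
        }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof FactoryKey)) {
                return false;
            }
            FactoryKey other = (FactoryKey) o;
            return scheme.equals(other.scheme) && specId.equals(other.specId);
        }

        @Override
        public int hashCode() {
            return Objects.hash(scheme, specId);
        }
    }

    public static void main(String[] args) {
        // Keyed only by scheme, the second registration silently replaces the first.
        Map<String, String> bySchemeOnly = new HashMap<>();
        bySchemeOnly.put("hdfs", "factory-for-cluster-A");
        bySchemeOnly.put("hdfs", "factory-for-cluster-B");
        System.out.println(bySchemeOnly.size()); // 1

        // Keyed by (scheme, specId), both factories coexist.
        Map<FactoryKey, String> byCompositeKey = new HashMap<>();
        byCompositeKey.put(new FactoryKey("hdfs", "cluster-A"), "factory-for-cluster-A");
        byCompositeKey.put(new FactoryKey("hdfs", "cluster-B"), "factory-for-cluster-B");
        System.out.println(byCompositeKey.size()); // 2
    }
}
```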





[jira] [Updated] (FLINK-18213) refactor kafka sql connector to use just one shade to compatible 0.10.0.2 +

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-18213:

Fix Version/s: 1.16.0

> refactor kafka sql connector to use just one shade to compatible 0.10.0.2 +
> ---
>
> Key: FLINK-18213
> URL: https://issues.apache.org/jira/browse/FLINK-18213
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.11.0
>Reporter: jackylau
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.15.0, 1.16.0
>
>
> Flink master (1.12-SNAPSHOT) currently supports Kafka 0.10/0.11/2.x with three 
> flink-sql-connector shaded jars.
> Since the Kafka client is compatible with brokers from 0.10.2.x onwards, a 
> Kafka 2.x client can access broker servers running 0.10/0.11/2.x, so a single 
> Kafka SQL shaded jar would suffice.
> For this, we should do two things:
> 1) refactor to one shaded jar
> 2) rename the flink-kafka-connector modules that share the same qualified 
> names, to avoid conflicts such as NoSuchMethod or ClassNotFound errors





[jira] [Updated] (FLINK-20572) HiveCatalog should be a standalone module

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-20572:

Fix Version/s: 1.16.0

> HiveCatalog should be a standalone module
> -
>
> Key: FLINK-20572
> URL: https://issues.apache.org/jira/browse/FLINK-20572
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Reporter: Rui Li
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.15.0, 1.16.0
>
>
> Currently HiveCatalog is the only implementation that supports persistent 
> metadata. It's possible that users just want to use HiveCatalog to manage 
> metadata, and don't intend to read/write Hive tables. However HiveCatalog 
> is part of Hive connector which requires lots of Hive dependencies, and 
> introducing these dependencies increases the chance of lib conflicts. We 
> should investigate whether we can move HiveCatalog to a light-weight 
> standalone module.





[jira] [Updated] (FLINK-25316) BlobServer can get stuck during shutdown

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-25316:

Fix Version/s: 1.16.0

> BlobServer can get stuck during shutdown
> 
>
> Key: FLINK-25316
> URL: https://issues.apache.org/jira/browse/FLINK-25316
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Robert Metzger
>Priority: Minor
> Fix For: 1.15.0, 1.16.0
>
>
> The cluster shutdown can get stuck
> {code}
> "AkkaRpcService-Supervisor-Termination-Future-Executor-thread-1" #89 daemon 
> prio=5 os_prio=0 tid=0x004017d7 nid=0x2ec in Object.wait() 
> [0x00402a9b5000]
>java.lang.Thread.State: WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0xd6c48368> (a 
> org.apache.flink.runtime.blob.BlobServer)
>   at java.lang.Thread.join(Thread.java:1252)
>   - locked <0xd6c48368> (a 
> org.apache.flink.runtime.blob.BlobServer)
>   at java.lang.Thread.join(Thread.java:1326)
>   at org.apache.flink.runtime.blob.BlobServer.close(BlobServer.java:319)
>   at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.stopClusterServices(ClusterEntrypoint.java:406)
>   - locked <0xd5d27350> (a java.lang.Object)
>   at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$shutDownAsync$4(ClusterEntrypoint.java:505
> {code}
> because the BlobServer.run() method ignores interrupts:
> {code}
> "BLOB Server listener at 6124" #30 daemon prio=5 os_prio=0 
> tid=0x00401c929800 nid=0x2b4 runnable [0x0040263f9000]
>java.lang.Thread.State: RUNNABLE
>   at java.net.PlainSocketImpl.socketAccept(Native Method)
>   at 
> java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
>   at java.net.ServerSocket.implAccept(ServerSocket.java:560)
>   at java.net.ServerSocket.accept(ServerSocket.java:528)
>   at 
> org.apache.flink.util.NetUtils.acceptWithoutTimeout(NetUtils.java:143)
>   at org.apache.flink.runtime.blob.BlobServer.run(BlobServer.java:268)
> {code}
> This issue was introduced in FLINK-24156 and first mentioned in 
> https://issues.apache.org/jira/browse/FLINK-24113?focusedCommentId=17459414&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17459414
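The usual fix for a thread stuck in ServerSocket.accept() is to close the server socket from the shutdown path: accept() ignores interrupts, but throws a SocketException once the socket is closed. A self-contained demonstration (not the actual BlobServer code):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.SocketException;

public class AcceptShutdownSketch {

    /** Returns true if closing the server socket unblocked the accept() loop. */
    static boolean shutdownUnblocksAccept() throws Exception {
        ServerSocket serverSocket = new ServerSocket(0); // ephemeral port

        Thread listener = new Thread(() -> {
            try {
                while (true) {
                    serverSocket.accept().close(); // blocks; interrupt() does NOT wake it up
                }
            } catch (SocketException e) {
                // accept() throws once the socket is closed -> clean exit
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }, "blob-listener");
        listener.start();

        Thread.sleep(100);    // give the listener time to block in accept()
        serverSocket.close(); // shutdown path: closing the socket unblocks accept()
        listener.join(5000);
        return !listener.isAlive();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(shutdownUnblocksAccept() ? "terminated" : "stuck");
    }
}
```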





[jira] [Updated] (FLINK-22223) Explicitly configure rest.address in distribution config

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-3?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-3:

Fix Version/s: 1.16.0

> Explicitly configure rest.address in distribution config
> 
>
> Key: FLINK-3
> URL: https://issues.apache.org/jira/browse/FLINK-3
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: Chesnay Schepler
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.15.0, 1.16.0
>
>
> We should finally update the default config in the distribution:
> {code}
> Configuration [] - Config uses fallback configuration key 
> 'jobmanager.rpc.address' instead of key 'rest.address'
> {code}
> As part of this task we also need to go over all occurrences of 
> {{jobmanager.rpc.address}} in the documentation and determine whether this is 
> about internal or external connectivity.
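The explicit configuration could look like this in flink-conf.yaml (addresses illustrative):

```yaml
# Internal connectivity: the address of the JobManager's RPC endpoint.
jobmanager.rpc.address: localhost

# External connectivity: the address clients use to reach the REST endpoint.
# Set explicitly so Flink no longer falls back to 'jobmanager.rpc.address'.
rest.address: localhost
```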





[jira] [Updated] (FLINK-22299) should remind users to ignoreParseErrors option when parser failed

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-22299:

Fix Version/s: 1.16.0

> should remind users to ignoreParseErrors option when parser failed
> --
>
> Key: FLINK-22299
> URL: https://issues.apache.org/jira/browse/FLINK-22299
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Reporter: Jingsong Lee
>Priority: Minor
> Fix For: 1.15.0, 1.16.0
>
>
> For JSON and CSV we provide the ignoreParseErrors option, but users may not 
> be aware of it.
> We can remind users of it when parsing fails.
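A sketch of such a reminder; the helper and the exact wording are made up, assuming the JSON format's 'json.ignore-parse-errors' option key:

```java
public class ParseErrorHint {

    /** Wraps a parse failure with a pointer to the ignore-parse-errors option. */
    static RuntimeException enrichParseError(String field, Throwable cause, String optionKey) {
        return new RuntimeException(
                "Failed to deserialize field '" + field + "'. If corrupt records should be "
                        + "skipped instead of failing the job, set '" + optionKey + "' = 'true'.",
                cause);
    }

    public static void main(String[] args) {
        try {
            Integer.parseInt("not-a-number"); // stand-in for a format-level parse failure
        } catch (NumberFormatException e) {
            RuntimeException hint = enrichParseError("amount", e, "json.ignore-parse-errors");
            System.out.println(hint.getMessage());
        }
    }
}
```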





[jira] [Updated] (FLINK-24483) Document what is Public API and what compatibility guarantees Flink is providing

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-24483:

Fix Version/s: 1.16.0

> Document what is Public API and what compatibility guarantees Flink is 
> providing
> 
>
> Key: FLINK-24483
> URL: https://issues.apache.org/jira/browse/FLINK-24483
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream, Documentation, Table SQL / API
>Affects Versions: 1.14.0, 1.12.5, 1.13.2
>Reporter: Piotr Nowojski
>Priority: Major
> Fix For: 1.15.0, 1.16.0, 1.13.7, 1.14.5
>
>
> We should document:
> * What constitutes the Public API, and what the 
> Public/PublicEvolving/Experimental/Internal annotations mean.
> * What compatibility guarantees we are providing: forward (backward?) 
> functional/compile/binary compatibility for {{@Public}} interfaces?
> A good starting point: 
> https://cwiki.apache.org/confluence/plugins/servlet/mobile?contentId=44302796#content/view/62686683





[jira] [Updated] (FLINK-21898) JobRetrievalITCase crash

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-21898:

Fix Version/s: 1.16.0

> JobRetrievalITCase crash
> 
>
> Key: FLINK-21898
> URL: https://issues.apache.org/jira/browse/FLINK-21898
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
> Fix For: 1.15.0, 1.16.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=15083&view=logs&j=8fd9202e-fd17-5b26-353c-ac1ff76c8f28&t=a0a633b8-47ef-5c5a-2806-3c13b9e48228&l=4383





[jira] [Updated] (FLINK-20305) Iterative job with maxWaitTimeMillis does not terminate

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-20305:

Fix Version/s: 1.16.0

> Iterative job with maxWaitTimeMillis does not terminate
> ---
>
> Key: FLINK-20305
> URL: https://issues.apache.org/jira/browse/FLINK-20305
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.12.0
>Reporter: Till Rohrmann
>Priority: Minor
>  Labels: auto-deprioritized-major, usability
> Fix For: 1.15.0, 1.16.0
>
>
> While testing the {{DataStream}} API, I noticed that an iterative job with 
> {{maxWaitTimeMillis}} set, sometimes did not terminate depending on the used 
> parallelism. Since I used a bounded source with few events, it looked a bit 
> as if the iteration tasks couldn't terminate if they didn't receive an event. 
> The corresponding testing job including test data can be found 
> [here|https://github.com/tillrohrmann/flink-streaming-batch-execution/tree/iteration-problem].





[jira] [Updated] (FLINK-23390) FlinkKafkaInternalProducerITCase.testResumeTransactionAfterClosed

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-23390:

Fix Version/s: 1.16.0

> FlinkKafkaInternalProducerITCase.testResumeTransactionAfterClosed
> -
>
> Key: FLINK-23390
> URL: https://issues.apache.org/jira/browse/FLINK-23390
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
>Reporter: Xintong Song
>Assignee: Fabian Paul
>Priority: Major
>  Labels: stale-assigned, test-stability
> Fix For: 1.15.0, 1.16.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20454&view=logs&j=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f&t=f266c805-9429-58ed-2f9e-482e7b82f58b&l=6914
> {code}
> Jul 14 22:01:05 [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 49 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase
> Jul 14 22:01:05 [ERROR] 
> testResumeTransactionAfterClosed(org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase)
>   Time elapsed: 5.271 s  <<< ERROR!
> Jul 14 22:01:05 java.lang.Exception: Unexpected exception, 
> expected but was
> Jul 14 22:01:05   at 
> org.junit.internal.runners.statements.ExpectException.evaluate(ExpectException.java:30)
> Jul 14 22:01:05   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Jul 14 22:01:05   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Jul 14 22:01:05   at 
> java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> Jul 14 22:01:05   at java.base/java.lang.Thread.run(Thread.java:834)
> Jul 14 22:01:05 Caused by: java.lang.AssertionError: The message should have 
> been successfully sent expected null, but 
> was:
> Jul 14 22:01:05   at org.junit.Assert.fail(Assert.java:89)
> Jul 14 22:01:05   at org.junit.Assert.failNotNull(Assert.java:756)
> Jul 14 22:01:05   at org.junit.Assert.assertNull(Assert.java:738)
> Jul 14 22:01:05   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase.getClosedProducer(FlinkKafkaInternalProducerITCase.java:228)
> Jul 14 22:01:05   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase.testResumeTransactionAfterClosed(FlinkKafkaInternalProducerITCase.java:184)
> Jul 14 22:01:05   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> Jul 14 22:01:05   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jul 14 22:01:05   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jul 14 22:01:05   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> Jul 14 22:01:05   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Jul 14 22:01:05   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jul 14 22:01:05   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Jul 14 22:01:05   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jul 14 22:01:05   at 
> org.junit.internal.runners.statements.ExpectException.evaluate(ExpectException.java:19)
> Jul 14 22:01:05   ... 4 more
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-21940) Rowtime/proctime should be obtained from getTimestamp instead of getLong

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-21940:

Fix Version/s: 1.16.0

> Rowtime/proctime should be obtained from getTimestamp instead of getLong
> 
>
> Key: FLINK-21940
> URL: https://issues.apache.org/jira/browse/FLINK-21940
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: Jingsong Lee
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
> Fix For: 1.15.0, 1.16.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-21179) Make sure that the open/close methods of the Python DataStream Function are not implemented when using in ReducingState and AggregatingState

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-21179:

Fix Version/s: 1.16.0

> Make sure that the open/close methods of the Python DataStream Function are 
> not implemented when using in ReducingState and AggregatingState
> 
>
> Key: FLINK-21179
> URL: https://issues.apache.org/jira/browse/FLINK-21179
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Wei Zhong
>Priority: Major
> Fix For: 1.15.0, 1.16.0
>
>
> As ReducingState and AggregatingState only support non-rich functions, we 
> need to make sure that the open/close methods of the Python DataStream 
> Function are not implemented when used with ReducingState and AggregatingState.
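The check described above can be sketched with plain reflection. This is an illustrative, self-contained sketch only — `BaseFunction`, `RichMethodCheck`, and the sample subclasses are hypothetical stand-ins, not Flink's actual Python-gateway types.

```java
import java.lang.reflect.Method;

// Hypothetical validation sketch: reject user functions that override
// open()/close() when they will back a ReducingState/AggregatingState.
public class RichMethodCheck {

    /** Stand-in for a rich function base class with default open/close. */
    public static class BaseFunction {
        public void open() {}
        public void close() {}
    }

    /** Sample subclass that does NOT override open/close. */
    public static class Plain extends BaseFunction {}

    /** Sample subclass that DOES override open(). */
    public static class WithOpen extends BaseFunction {
        @Override
        public void open() {}
    }

    /** Returns true if clazz overrides the given no-arg method of BaseFunction. */
    public static boolean overrides(Class<? extends BaseFunction> clazz, String name) {
        try {
            Method m = clazz.getMethod(name);
            // If the resolved method is declared below BaseFunction, it is overridden.
            return m.getDeclaringClass() != BaseFunction.class;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    /** Throws if the user function implements open() or close(). */
    public static void validate(Class<? extends BaseFunction> clazz) {
        if (overrides(clazz, "open") || overrides(clazz, "close")) {
            throw new IllegalArgumentException(
                    "open()/close() must not be implemented for functions used in "
                            + "ReducingState/AggregatingState");
        }
    }
}
```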



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-16024) support filter pushdown in jdbc connector

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-16024:

Fix Version/s: 1.16.0

> support filter pushdown in jdbc connector
> -
>
> Key: FLINK-16024
> URL: https://issues.apache.org/jira/browse/FLINK-16024
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.15.0, 1.16.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-20190) A New Window Trigger that can trigger window operation both by event time interval、event count for DataStream API

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-20190:

Fix Version/s: 1.16.0

> A New Window Trigger that can trigger window operation both by event time 
> interval、event count for DataStream API
> -
>
> Key: FLINK-20190
> URL: https://issues.apache.org/jira/browse/FLINK-20190
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataStream
>Reporter: GaryGao
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.15.0, 1.16.0
>
>
> In production environments, when performing window operations such as window 
> aggregation with the DataStream API, developers are often asked to trigger 
> the window not only when the watermark passes the max timestamp of the 
> window, but also at a fixed event-time interval and after a fixed count of 
> events. The motivation is to get frequently updated window results instead of 
> waiting until the watermark passes the max timestamp of the window. This is 
> very useful in reporting and other BI applications.
> The default triggers provided by Flink cannot meet this requirement, so I 
> developed a new trigger, called CountAndContinuousEventTimeTrigger, which 
> combines ContinuousEventTimeTrigger with CountTrigger to do the above.
>  
> To use CountAndContinuousEventTimeTrigger, you should specify the two 
> parameters of its constructor:
> {code:java}
> private CountAndContinuousEventTimeTrigger(Time interval, long 
> maxCount);{code}
>  * Time interval: the trigger fires continuously based on the given time 
> interval, the same as ContinuousEventTimeTrigger.
>  * long maxCount: the trigger fires once the count of elements in a pane 
> reaches the given count, the same as CountTrigger.
>  
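The combined fire decision described above can be sketched independently of Flink's Trigger API. The class and method names below are hypothetical; this only models the "fire on count threshold OR on interval timer" logic under stated assumptions.

```java
// Minimal sketch of a count-or-interval fire decision. Not Flink's Trigger API;
// a real trigger would register event-time timers and keep state per window.
public class CountOrIntervalFireDecision {
    private final long intervalMs;
    private final long maxCount;
    private long count = 0;
    private long nextFireTimestamp;

    public CountOrIntervalFireDecision(long intervalMs, long maxCount) {
        this.intervalMs = intervalMs;
        this.maxCount = maxCount;
        this.nextFireTimestamp = intervalMs;
    }

    /** Called per element; returns true when the count threshold is reached. */
    public boolean onElement() {
        count++;
        if (count >= maxCount) {
            count = 0; // reset after firing, as CountTrigger does
            return true;
        }
        return false;
    }

    /** Called as the watermark advances; returns true when the interval timer fires. */
    public boolean onWatermark(long watermark) {
        if (watermark >= nextFireTimestamp) {
            nextFireTimestamp += intervalMs;
            return true;
        }
        return false;
    }
}
```

Either condition firing would emit an early (updated) window result, while the final result is still produced when the watermark passes the window's max timestamp.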



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-20896) Support SupportsAggregatePushDown for JDBC TableSource

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-20896:

Fix Version/s: 1.16.0

> Support SupportsAggregatePushDown for JDBC TableSource
> --
>
> Key: FLINK-20896
> URL: https://issues.apache.org/jira/browse/FLINK-20896
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Reporter: Sebastian Liu
>Priority: Major
>  Labels: auto-unassigned
> Fix For: 1.15.0, 1.16.0
>
>
> Will add SupportsAggregatePushDown implementation for JDBC TableSource.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-23740) SQL Full Outer Join bug

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-23740:

Fix Version/s: 1.16.0

> SQL Full Outer Join bug
> ---
>
> Key: FLINK-23740
> URL: https://issues.apache.org/jira/browse/FLINK-23740
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.13.1, 1.13.2
>Reporter: Fu Kai
>Priority: Critical
> Fix For: 1.15.0, 1.16.0
>
>
> Hi team,
> We encountered an issue with FULL OUTER JOIN in Flink SQL, which happens 
> occasionally at very low probability: join output records are not correctly 
> updated. We cannot locate the root cause for now by inspecting the SQL join 
> logic in 
> [StreamingJoinOperator|https://github.com/apache/flink/blob/master/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/stream/StreamingJoinOperator.java#L198].
>  It cannot be stably reproduced, and it only happens with massive data volume.
> We suspect the FULL OUTER JOIN rather than something else such as LEFT OUTER 
> JOIN because the issue only arises after we introduced FULL OUTER JOIN into 
> the join flow. There are two queries below: the first contains only left 
> joins (though nested) and shows no issue; the second contains both left and 
> full outer joins (nested as well), and sometimes updates from the left table 
> A (and other tables before the full outer join operator) are not reflected in 
> the final output. We suspect the introduction of the full outer join caused 
> the problem, although at a very low probability (~10 out of ~30 million). 
> The root cause could be something else; the suspicion of FULL OUTER JOIN is 
> based on the results of our current experiments and observations.
> {code:java}
> create table A(
> k1 int,
> k2 int,
> k3 int,
> k4 int,
> k5 int,
> PRIMARY KEY (k1, k2, k3, k4, k5) NOT ENFORCED
> ) WITH ();
> create table B(
> k1 int,
> k2 int,
> k3 int,
> PRIMARY KEY (k1, k2, k3) NOT ENFORCED
> ) WITH ();
> create table C(
> k1 int,
> k2 int,
> k3 int,
> PRIMARY KEY (k1, k2, k3) NOT ENFORCED
> ) WITH ();
> create table D(
> k1 int,
> k2 int,
> PRIMARY KEY (k1, k2) NOT ENFORCED
> ) WITH ();
> // query with left join, no issue detected
> select * from A 
> left outer join 
> (select * from B
> left outer join C
> on 
> B.k1 = C.k1 AND
> B.k2 = C.k2 AND
> B.k3 = C.k3
> ) as BC
> on
> A.k1 = BC.k1 AND
> A.k2 = BC.k2 AND
> A.k3 = BC.k3
> left outer join D
> on 
> A.k1 = D.k1 AND
> A.k2 = D.k2
> ;
> // query with full outer join combined with left outer join; record updates 
> // from left table A are sometimes not reflected in the final output
> select * from A 
> left outer join 
> (select * from B
> full outer join C
> on 
> B.k1 = C.k1 AND
> B.k2 = C.k2 AND
> B.k3 = C.k3
> ) as BC
> on
> A.k1 = BC.k1 AND
> A.k2 = BC.k2 AND
> A.k3 = BC.k3
> left outer join D
> on 
> A.k1 = D.k1 AND
> A.k2 = D.k2
> ;
> {code}
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-19499) Expose Metric Groups to Split Assigners

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-19499:

Fix Version/s: 1.16.0

> Expose Metric Groups to Split Assigners
> ---
>
> Key: FLINK-19499
> URL: https://issues.apache.org/jira/browse/FLINK-19499
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Stephan Ewen
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.15.0, 1.16.0
>
>
> Split Assigners should have access to metric groups, so they can report 
> metrics on assignment, like pending splits, local-, and remote assignments.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-24095) Elasticsearch7DynamicSinkITCase.testWritingDocuments fails due to socket timeout

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-24095:

Fix Version/s: 1.16.0

> Elasticsearch7DynamicSinkITCase.testWritingDocuments fails due to socket 
> timeout
> 
>
> Key: FLINK-24095
> URL: https://issues.apache.org/jira/browse/FLINK-24095
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
> Fix For: 1.15.0, 1.16.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23250&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d&l=12781
> {code}
> Aug 31 23:06:22 Caused by: java.net.SocketTimeoutException: 30,000 
> milliseconds timeout on connection http-outgoing-3 [ACTIVE]
> Aug 31 23:06:22   at 
> org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:808)
> Aug 31 23:06:22   at 
> org.elasticsearch.client.RestClient.performRequest(RestClient.java:248)
> Aug 31 23:06:22   at 
> org.elasticsearch.client.RestClient.performRequest(RestClient.java:235)
> Aug 31 23:06:22   at 
> org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1514)
> Aug 31 23:06:22   at 
> org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1499)
> Aug 31 23:06:22   at 
> org.elasticsearch.client.RestHighLevelClient.ping(RestHighLevelClient.java:720)
> Aug 31 23:06:22   at 
> org.apache.flink.streaming.connectors.elasticsearch7.Elasticsearch7ApiCallBridge.verifyClientConnection(Elasticsearch7ApiCallBridge.java:138)
> Aug 31 23:06:22   at 
> org.apache.flink.streaming.connectors.elasticsearch7.Elasticsearch7ApiCallBridge.verifyClientConnection(Elasticsearch7ApiCallBridge.java:46)
> Aug 31 23:06:22   at 
> org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase.open(ElasticsearchSinkBase.java:318)
> Aug 31 23:06:22   at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
> Aug 31 23:06:22   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:100)
> Aug 31 23:06:22   at 
> org.apache.flink.streaming.api.operators.StreamSink.open(StreamSink.java:46)
> Aug 31 23:06:22   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:110)
> Aug 31 23:06:22   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:691)
> Aug 31 23:06:22   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
> Aug 31 23:06:22   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:667)
> Aug 31 23:06:22   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:639)
> Aug 31 23:06:22   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)
> Aug 31 23:06:22   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927)
> Aug 31 23:06:22   at 
> org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766)
> Aug 31 23:06:22   at 
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)
> Aug 31 23:06:22   at java.lang.Thread.run(Thread.java:748)
> Aug 31 23:06:22 Caused by: java.net.SocketTimeoutException: 30,000 
> milliseconds timeout on connection http-outgoing-3 [ACTIVE]
> Aug 31 23:06:22   at 
> org.apache.http.nio.protocol.HttpAsyncRequestExecutor.timeout(HttpAsyncRequestExecutor.java:387)
> Aug 31 23:06:22   at 
> org.apache.http.impl.nio.client.InternalIODispatch.onTimeout(InternalIODispatch.java:92)
> Aug 31 23:06:22   at 
> org.apache.http.impl.nio.client.InternalIODispatch.onTimeout(InternalIODispatch.java:39)
> Aug 31 23:06:22   at 
> org.apache.http.impl.nio.reactor.AbstractIODispatch.timeout(AbstractIODispatch.java:175)
> Aug 31 23:06:22   at 
> org.apache.http.impl.nio.reactor.BaseIOReactor.sessionTimedOut(BaseIOReactor.java:261)
> Aug 31 23:06:22   at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.timeoutCheck(AbstractIOReactor.java:502)
> Aug 31 23:06:22   at 
> org.apache.http.impl.nio.reactor.BaseIOReactor.validate(BaseIOReactor.java:211)
> Aug 31 23:06:22   at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:280)
> Aug 31 23:06:22   at 
> org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
> Aug 31 23:06:22   at 
> org.apache.http.impl.nio.reactor.AbstractMultiwork

[jira] [Updated] (FLINK-20726) Introduce Pulsar connector

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-20726:

Fix Version/s: 1.16.0

> Introduce Pulsar connector
> --
>
> Key: FLINK-20726
> URL: https://issues.apache.org/jira/browse/FLINK-20726
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Affects Versions: 1.13.0
>Reporter: Jianyun Zhao
>Priority: Major
>  Labels: auto-unassigned
> Fix For: 1.15.0, 1.16.0
>
>
> Pulsar is an important player in messaging middleware, and it is essential 
> for Flink to support Pulsar.
> Our existing code is maintained at 
> [streamnative/pulsar-flink|https://github.com/streamnative/pulsar-flink]; 
> next we will split it into several PRs and merge them back into the community.
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-16548) Expose consistent environment variable to identify the component name and resource id of jm/tm

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-16548:

Fix Version/s: 1.16.0

> Expose consistent environment variable to identify the component name and 
> resource id of jm/tm
> --
>
> Key: FLINK-16548
> URL: https://issues.apache.org/jira/browse/FLINK-16548
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: hejianchao
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.15.0, 1.16.0
>
>
> We propose to expose environment variables that identify the component name 
> and resource id of the JM/TM.
> Specifically:
> - Expose {{FLINK_COMPONENT_NAME}}. For the JM it should be "jobmanager"; for 
> the TM, "taskexecutor".
> - Expose {{FLINK_COMPONENT_ID}}. For the JM/TM, it should be the resource id.
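A process (or a logging/metrics integration) could then read these variables as sketched below. The variable names follow the proposal; the fallback values and class name are illustrative assumptions.

```java
// Sketch of reading the proposed identity variables with a safe fallback.
public class ComponentIdentity {

    /** "jobmanager" or "taskexecutor" under the proposal; "unknown" if unset. */
    public static String componentName() {
        return getOrDefault("FLINK_COMPONENT_NAME", "unknown");
    }

    /** The resource id under the proposal; "unknown" if unset. */
    public static String componentId() {
        return getOrDefault("FLINK_COMPONENT_ID", "unknown");
    }

    static String getOrDefault(String key, String fallback) {
        String v = System.getenv(key);
        return (v == null || v.isEmpty()) ? fallback : v;
    }
}
```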



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26931) Pulsar sink's producer name should be unique

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-26931:

Fix Version/s: 1.16.0

> Pulsar sink's producer name should be unique
> 
>
> Key: FLINK-26931
> URL: https://issues.apache.org/jira/browse/FLINK-26931
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Yufan Sheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0, 1.16.0
>
>
> Pulsar's new sink interface does not make the producer name unique, which 
> can make Pulsar fail to consume messages.
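One way to guarantee uniqueness is to derive the producer name per sink subtask, as in the sketch below. The `prefix-subtaskId-uuid` scheme is an assumption for illustration, not the actual fix in the Pulsar connector.

```java
import java.util.UUID;

// Hypothetical helper: builds a producer name that is unique per subtask and
// per attempt (the UUID also distinguishes restarts of the same subtask).
public class ProducerNames {
    public static String unique(String prefix, int subtaskId) {
        return prefix + "-" + subtaskId + "-" + UUID.randomUUID();
    }
}
```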



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-15183) Use SQL-CLI to TPC-DS E2E test

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-15183:

Fix Version/s: 1.16.0

> Use SQL-CLI to TPC-DS E2E test
> --
>
> Key: FLINK-15183
> URL: https://issues.apache.org/jira/browse/FLINK-15183
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API, Tests
>Reporter: Jingsong Lee
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.15.0, 1.16.0
>
>
> Now that the SQL CLI supports DDL, we can use it to run the TPC-DS test.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25040) FlinkKafkaInternalProducerITCase.testInitTransactionId failed on AZP

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-25040:

Fix Version/s: 1.16.0

> FlinkKafkaInternalProducerITCase.testInitTransactionId failed on AZP
> 
>
> Key: FLINK-25040
> URL: https://issues.apache.org/jira/browse/FLINK-25040
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
>Reporter: Till Rohrmann
>Assignee: Fabian Paul
>Priority: Major
>  Labels: stale-assigned, test-stability
> Fix For: 1.15.0, 1.16.0
>
>
> The test {{FlinkKafkaInternalProducerITCase.testInitTransactionId}} failed on 
> AZP with:
> {code}
> Nov 24 09:25:41 [ERROR] 
> org.apache.flink.connector.kafka.sink.FlinkKafkaInternalProducerITCase.testInitTransactionId
>   Time elapsed: 82.766 s  <<< ERROR!
> Nov 24 09:25:41 org.apache.kafka.common.errors.TimeoutException: Timeout 
> expired after 6 milliseconds while awaiting InitProducerId
> Nov 24 09:25:41 
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=26987&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=15a22db7-8faa-5b34-3920-d33c9f0ca23c&l=6726



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25353) Add tests to ensure API graduation process is honoured

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-25353:

Fix Version/s: 1.16.0

> Add tests to ensure API graduation process is honoured
> --
>
> Key: FLINK-25353
> URL: https://issues.apache.org/jira/browse/FLINK-25353
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.15.0
>Reporter: Till Rohrmann
>Priority: Major
> Fix For: 1.15.0, 1.16.0
>
>
> We need a test that ensures that the API graduation process is honoured. 
> Concretely, the test should fail when it sees an annotated API that has kept 
> the same API stability level for more than 2 minor release cycles without a 
> recorded reason for the missed graduation.
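The graduation rule could be checked roughly as sketched below. This is a self-contained illustration under stated assumptions — the two-minor-release threshold comes from the issue, but the class, the version-string handling, and the exemption set are hypothetical.

```java
import java.util.Set;

// Illustrative sketch of the graduation rule: an API whose stability annotation
// dates from version "since" must graduate within two minor releases of
// "current" unless an exemption (a recorded reason) exists for it.
public class GraduationCheck {

    /** Minor-release distance, e.g. between "1.13" and "1.15" it is 2. */
    public static int minorDistance(String since, String current) {
        int sinceMinor = Integer.parseInt(since.split("\\.")[1]);
        int currentMinor = Integer.parseInt(current.split("\\.")[1]);
        return currentMinor - sinceMinor;
    }

    /** Returns false when the API missed graduation and has no exemption. */
    public static boolean isCompliant(
            String api, String since, String current, Set<String> exemptions) {
        return minorDistance(since, current) <= 2 || exemptions.contains(api);
    }
}
```

An ArchUnit-style test could scan all annotated classes and assert `isCompliant` for each, reporting the offenders.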



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-20767) add nested field support for SupportsFilterPushDown

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-20767:

Fix Version/s: 1.16.0

> add nested field support for SupportsFilterPushDown
> ---
>
> Key: FLINK-20767
> URL: https://issues.apache.org/jira/browse/FLINK-20767
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.12.0
>Reporter: Jun Zhang
>Priority: Major
> Fix For: 1.15.0, 1.16.0
>
>
> I think we should add nested field support for SupportsFilterPushDown.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-24636) Move cluster deletion operation cache into ResourceManager

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-24636:

Fix Version/s: 1.16.0

> Move cluster deletion operation cache into ResourceManager
> --
>
> Key: FLINK-24636
> URL: https://issues.apache.org/jira/browse/FLINK-24636
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination, Runtime / REST
>Reporter: Chesnay Schepler
>Priority: Major
> Fix For: 1.15.0, 1.16.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-14340) Specify an unique DFSClient name for Hadoop FileSystem

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-14340:

Fix Version/s: 1.16.0

> Specify an unique DFSClient name for Hadoop FileSystem
> --
>
> Key: FLINK-14340
> URL: https://issues.apache.org/jira/browse/FLINK-14340
> Project: Flink
>  Issue Type: Improvement
>  Components: FileSystems
>Reporter: Congxian Qiu
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.15.0, 1.16.0
>
>
> Currently, when Flink reads from or writes to HDFS, we do not set the 
> DFSClient name for the connections, so we cannot distinguish the connections 
> or quickly find the specific Job or TM.
> This issue proposes adding the {{container_id}} as a unique name when 
> initializing the Hadoop FileSystem, so we can easily tell which Job/TM a 
> connection belongs to.
>  
> The core change is to add a line such as the following in 
> {{org.apache.flink.runtime.fs.hdfs.HadoopFsFactory#create}}:
>  
> {code:java}
> hadoopConfig.set(“mapreduce.task.attempt.id”, 
> System.getenv().getOrDefault(CONTAINER_KEY_IN_ENV, 
> DEFAULT_CONTAINER_ID));{code}
>  
> Currently, in {{YarnResourceManager}} and {{MesosResourceManager}} we both 
> have an environment key {{ENV_FLINK_CONTAINER_ID = "_FLINK_CONTAINER_ID"}}, so 
> maybe we should introduce this key in {{StandaloneResourceManager}} as well.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-22237) Flamegraph component calls draw method from template

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-22237:

Fix Version/s: 1.16.0

> Flamegraph component calls draw method from template
> 
>
> Key: FLINK-22237
> URL: https://issues.apache.org/jira/browse/FLINK-22237
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Reporter: Chesnay Schepler
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.15.0, 1.16.0
>
>
> [One 
> comment|https://github.com/apache/flink/pull/15054#discussion_r603726822] 
> from the FLINK-13550 PR was not addressed because we couldn't figure out on 
> our own how to implement it properly (it actually broke the UI).
> We should follow up on that.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26446) Update Feature Radar in Apache Flink Roadmap

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-26446:

Fix Version/s: 1.16.0

> Update Feature Radar in Apache Flink Roadmap 
> -
>
> Key: FLINK-26446
> URL: https://issues.apache.org/jira/browse/FLINK-26446
> Project: Flink
>  Issue Type: Improvement
>  Components: Project Website
>Reporter: Konstantin Knauf
>Priority: Critical
> Fix For: 1.15.0, 1.16.0
>
> Attachments: flink_feature_radar_3.png
>
>
> Deployment/Coordination:
>  * Java 8 -> Deprecation
>  * Add Deployment Modes
>  ** Application Mode -> Stable
>  ** Session Mode -> Stable
>  ** Per-Job Mode -> Deprecated
>  * Adaptive Scheduler -> Ready & Evolving
> Connectors (removing all connectors from this list due to connectors being 
> externalized: let's only focus on the interfaces)
>  * Remove NiFi Source
>  * Remove Kafka, File [via Unified Sink API]
>  * HybridSource -> Ready & Evolving
>  * Remove Hive SQL Source & Sink
>  * Remove JDBC Sink
>  * Remove Kinesis Source & Sink
>  * Remove Kafka, File, Pulsar [via Unified Source API]
>  * Remove Change-Data-Capture API and Connectors
>  * Remove Rabbit MQ Source
>  * Remove PubSub Source & Sink
>  * Remove HBase SQL Source & Sink
>  * Remove Elastic Search Sink
>  * Remove Cassandra Sink
>  * Legacy File Source & Sink -> Deprecated
>  * Legacy Kafka Source & Sink -> Deprecated
> Resource Managers:
>  * Remove Scala Shell
> APIs:
>  * SourceFunction & SinkFunction -> Deprecated
>  * Add Unified Sink API -> Ready & Evolving
>  * Add Unified Source API -> Stable
>  * Add ASync API -> Beta
>  * Add Topology Sink -> Beta
>  * Add SQL Upgrade Support -> MVP
> Languages:
>  * Remove Scala 2.11
> Libraries:
>  * Gelly -> Deprecated
>  * State Processor API -> Ready & Evolving



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-24306) group by index throw SqlValidatorException

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-24306:

Fix Version/s: 1.16.0

> group by index throw SqlValidatorException
> --
>
> Key: FLINK-24306
> URL: https://issues.apache.org/jira/browse/FLINK-24306
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.2, 1.13.1
>Reporter: zlzhang0122
>Priority: Major
> Fix For: 1.15.0, 1.16.0
>
> Attachments: calcite.png, sql_exception.png
>
>
> We create a table using following DDL:
> {code:java}
> create table if not exists datagen_source ( 
> id int,
> name string,
> sex string,
> age int,
> birthday string,
> proc_time as proctime()
> ) with (
> 'connector' = 'datagen',
> 'rows-per-second' = '1',
> 'fields.id.kind' = 'random',
> 'fields.id.min' = '1',
> 'fields.id.max' = '200');{code}
> When we run 
> {code:java}
> select id, count(*) from datagen_source group by id;{code}
> Everything is fine. But if we run 
> {code:java}
> select id, count(*) from datagen_source group by 1;{code}
> We get a SqlValidatorException like this:
>  !sql_exception.png!
> Since MySQL, Hive, Spark SQL, etc. all support GROUP BY by index, I think 
> Flink should support this syntax too.
>   



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-22230) Misleading TaskExecutorResourceUtils log messages

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-22230:

Fix Version/s: 1.16.0

> Misleading TaskExecutorResourceUtils log messages
> -
>
> Key: FLINK-22230
> URL: https://issues.apache.org/jira/browse/FLINK-22230
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Reporter: Chesnay Schepler
>Priority: Major
> Fix For: 1.15.0, 1.16.0
>
>
> ??Stephan Ewen: These lines show up on any execution of a local job and make 
> me think I forgot to configure something I probably should have, wondering 
> whether this might cause problems later? These have been in Flink for a few 
> releases now, might be worth rephrasing, though.??
> {code}
> 2021-03-30 17:57:22,483 INFO  
> org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The 
> configuration option taskmanager.cpu.cores required for local execution is 
> not set, setting it to the maximal possible value.
> 2021-03-30 17:57:22,483 INFO  
> org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The 
> configuration option taskmanager.memory.task.heap.size required for local 
> execution is not set, setting it to the maximal possible value.
> 2021-03-30 17:57:22,483 INFO  
> org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The 
> configuration option taskmanager.memory.task.off-heap.size required for local 
> execution is not set, setting it to the maximal possible value.
> 2021-03-30 17:57:22,483 INFO  
> org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The 
> configuration option taskmanager.memory.network.min required for local 
> execution is not set, setting it to its default value 64 mb.
> 2021-03-30 17:57:22,483 INFO  
> org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The 
> configuration option taskmanager.memory.network.max required for local 
> execution is not set, setting it to its default value 64 mb.
> 2021-03-30 17:57:22,483 INFO  
> org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The 
> configuration option taskmanager.memory.managed.size required for local 
> execution is not set, setting it to its default value 128 mb.
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-23997) Improvement for SQL windowing table-valued function

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-23997:

Fix Version/s: 1.16.0

> Improvement for SQL windowing table-valued function
> ---
>
> Key: FLINK-23997
> URL: https://issues.apache.org/jira/browse/FLINK-23997
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.15.0
>Reporter: Jing Zhang
>Assignee: Jing Zhang
>Priority: Major
> Fix For: 1.15.0, 1.16.0
>
>
> This is an umbrella issue for follow up issues related with windowing 
> table-valued function.
> FLIP-145: 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-145%3A+Support+SQL+windowing+table-valued+function#FLIP145:SupportSQLwindowingtablevaluedfunction-CumulatingWindows]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-12273) The default value of CheckpointRetentionPolicy should be RETAIN_ON_FAILURE, otherwise it cannot be recovered according to checkpoint after failure.

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-12273:

Fix Version/s: 1.16.0

> The default value of CheckpointRetentionPolicy should be RETAIN_ON_FAILURE, 
> otherwise it cannot be recovered according to checkpoint after failure.
> ---
>
> Key: FLINK-12273
> URL: https://issues.apache.org/jira/browse/FLINK-12273
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Affects Versions: 1.6.3, 1.6.4, 1.7.2, 1.8.0
>Reporter: Mr.Nineteen
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.15.0, 1.16.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The default value of CheckpointRetentionPolicy should be RETAIN_ON_FAILURE; 
> otherwise the job cannot be recovered from a checkpoint after a failure.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-22336) [umbrella] Enhance testing for built-in functions

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-22336:

Fix Version/s: 1.16.0

> [umbrella] Enhance testing for built-in functions
> -
>
> Key: FLINK-22336
> URL: https://issues.apache.org/jira/browse/FLINK-22336
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Runtime
>Reporter: Jingsong Lee
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.15.0, 1.16.0
>
>
> We can complement the testing for existing functions, from the following 
> aspects:
>  * test all supported types
>  * test normal value, random value, extreme value and illegal value of 
> parameters
>  * test null value of parameters
>  * test return null value
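The checklist above can be sketched as a small parameterized test matrix. The `upper_udf` below is a hypothetical stand-in for a built-in function under test, not Flink's actual test harness:

```python
# Conceptual sketch of the testing matrix described above: exercise one
# function with normal values, extreme values, and null parameters.
# `upper_udf` is a hypothetical stand-in for a built-in function.

def upper_udf(s):
    # Built-in functions are typically null-safe: null in, null out.
    return None if s is None else s.upper()

CASES = [
    ("normal value", "abc", "ABC"),
    ("empty string (extreme)", "", ""),
    ("very long string (extreme)", "x" * 10_000, "X" * 10_000),
    ("null parameter", None, None),  # test null value of parameters
]

def run_matrix():
    # Returns {case name: passed?} for every row of the matrix.
    return {name: upper_udf(arg) == expected for name, arg, expected in CASES}

if __name__ == "__main__":
    print(run_matrix())
```

The same matrix shape extends naturally to random and illegal values per supported type.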



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-11499) Extend StreamingFileSink BulkFormats to support arbitrary roll policies

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-11499:

Fix Version/s: 1.16.0

> Extend StreamingFileSink BulkFormats to support arbitrary roll policies
> ---
>
> Key: FLINK-11499
> URL: https://issues.apache.org/jira/browse/FLINK-11499
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Seth Wiesman
>Priority: Minor
>  Labels: auto-deprioritized-major, usability
> Fix For: 1.15.0, 1.16.0
>
>
> Currently, when using the StreamingFileSink, bulk-encoding formats can only be 
> combined with the `OnCheckpointRollingPolicy`, which rolls the in-progress 
> part file on every checkpoint.
> However, many bulk formats such as Parquet are most efficient when written as 
> large files; this is not possible when frequent checkpointing is enabled. 
> Currently the only workaround is to use long checkpoint intervals, which is 
> not ideal.
>  
> The StreamingFileSink should be enhanced to support arbitrary rolling policies 
> so users may write large bulk files while retaining frequent checkpoints.
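The decoupling the ticket asks for can be sketched as a pluggable policy the sink consults instead of hard-wiring "roll on every checkpoint". Names here are illustrative, not Flink's actual API:

```python
# Sketch of a pluggable rolling policy for a file sink. The sink asks the
# policy whether to roll, so bulk formats are no longer forced to roll on
# every checkpoint. Illustrative only, not Flink's API.

class RollingPolicy:
    def should_roll_on_checkpoint(self, part_size: int) -> bool:
        raise NotImplementedError

class OnCheckpointRollingPolicy(RollingPolicy):
    # Today's only option for bulk formats: roll on every checkpoint.
    def should_roll_on_checkpoint(self, part_size: int) -> bool:
        return True

class MaxSizeRollingPolicy(RollingPolicy):
    # Desired option: keep the part file open across checkpoints until it
    # is large enough, so formats like Parquet produce big files.
    def __init__(self, max_bytes: int):
        self.max_bytes = max_bytes

    def should_roll_on_checkpoint(self, part_size: int) -> bool:
        return part_size >= self.max_bytes

policy = MaxSizeRollingPolicy(max_bytes=128 * 1024 * 1024)
```

With such a policy, frequent checkpoints and large output files stop being mutually exclusive.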



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-21408) Clarify which DataStream sources support Batch execution

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-21408:

Fix Version/s: 1.16.0

> Clarify which DataStream sources support Batch execution
> 
>
> Key: FLINK-21408
> URL: https://issues.apache.org/jira/browse/FLINK-21408
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, Documentation
>Reporter: Chesnay Schepler
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.15.0, 1.16.0
>
>
> The DataStream "Execution Mode" documentation goes to great lengths to 
> describe the differences between the modes and their impact on various 
> aspects of Flink, such as checkpointing.
> However, the topic of connectors, and specifically which ones support Batch 
> mode (or whether there are any that don't), is not mentioned at all.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-19482) OrcRowInputFormat does not define serialVersionUID

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-19482:

Fix Version/s: 1.16.0

> OrcRowInputFormat does not define serialVersionUID
> --
>
> Key: FLINK-19482
> URL: https://issues.apache.org/jira/browse/FLINK-19482
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
> Fix For: 1.15.0, 1.16.0
>
>
> The {{OrcRowInputFormat}} class in org.apache.flink.orc does not define a 
> {{serialVersionUID}}.
> We should define a {{serialVersionUID}} for classes that are serialized, to 
> avoid a {{java.io.InvalidClassException}}.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-24140) Enhance the ITCase for FLIP-147

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-24140:

Fix Version/s: 1.16.0

> Enhance the ITCase for FLIP-147
> ---
>
> Key: FLINK-24140
> URL: https://issues.apache.org/jira/browse/FLINK-24140
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Reporter: Yun Gao
>Priority: Major
> Fix For: 1.15.0, 1.16.0
>
>
> We might enhance the ITCase to cover more cases:
> 1. Sources using new source API (bounded, stop-with-savepoint --drain)
> 2. Multiple-inputs operator with source chained (bounded, stop-with-savepoint 
> --drain)
> 3. Source / sink using UnionListState (bounded, stop-with-savepoint --drain)



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-17755) Support side-output of expiring states with TTL.

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-17755:

Fix Version/s: 1.16.0

> Support side-output of expiring states with TTL.
> 
>
> Key: FLINK-17755
> URL: https://issues.apache.org/jira/browse/FLINK-17755
> Project: Flink
>  Issue Type: New Feature
>  Components: API / State Processor
>Reporter: Roey Shem Tov
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 2.0.0, 1.15.0, 1.16.0
>
>
> When we set a StateTtlConfig on a StateDescriptor, an expired record is 
> deleted from the state backend.
> I want to suggest a new feature: exposing the expired entries as a side 
> output, so we can process them instead of just deleting them.
> For example, if we have a ListState with TTL enabled, we could receive the 
> expiring records of the list as a side output.
> What do you think?
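A minimal model of the proposed behavior: on expiry, entries are handed to a side channel instead of being silently dropped. This is a toy sketch, not Flink's state-backend or side-output API:

```python
import time

class TtlListState:
    """Toy model of a list state with TTL whose expired entries are routed
    to a side output instead of being silently deleted. Illustrative only."""

    def __init__(self, ttl_seconds, side_output, clock=time.time):
        self.ttl = ttl_seconds
        self.side_output = side_output  # callable receiving expired values
        self.clock = clock
        self._entries = []              # list of (timestamp, value)

    def add(self, value):
        self._entries.append((self.clock(), value))

    def get(self):
        # Partition entries into live and expired on access.
        now = self.clock()
        live = [(ts, v) for ts, v in self._entries if now - ts < self.ttl]
        expired = [(ts, v) for ts, v in self._entries if now - ts >= self.ttl]
        self._entries = live
        for _, value in expired:
            self.side_output(value)     # the suggested feature: don't just drop
        return [v for _, v in live]
```

A downstream operator could then audit, archive, or re-ingest the expired values.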



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-23869) Correct javadoc of ExecutionConfig#registerKryoType

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-23869:

Fix Version/s: 1.16.0

> Correct javadoc of ExecutionConfig#registerKryoType
> ---
>
> Key: FLINK-23869
> URL: https://issues.apache.org/jira/browse/FLINK-23869
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core, Documentation
>Reporter: Yun Tang
>Assignee: Yun Tang
>Priority: Minor
> Fix For: 1.15.0, 1.16.0
>
>
> The current Javadoc of ExecutionConfig#registerKryoType was mistakenly 
> copied from ExecutionConfig#registerPojoType; we could fix this error by 
> copying the correct description from the documentation.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-21559) Python DataStreamTests::test_process_function failed on AZP

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-21559:

Fix Version/s: 1.16.0

> Python DataStreamTests::test_process_function failed on AZP
> ---
>
> Key: FLINK-21559
> URL: https://issues.apache.org/jira/browse/FLINK-21559
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Affects Versions: 1.13.0
>Reporter: Till Rohrmann
>Priority: Minor
>  Labels: auto-deprioritized-critical, auto-deprioritized-major, 
> test-stability
> Fix For: 1.15.0, 1.16.0
>
>
> The Python test case {{DataStreamTests::test_process_function}} failed on AZP.
> {code}
> === short test summary info 
> 
> FAILED 
> pyflink/datastream/tests/test_data_stream.py::DataStreamTests::test_process_function
> = 1 failed, 705 passed, 22 skipped, 303 warnings in 583.39s (0:09:43) 
> ==
> ERROR: InvocationError for command /__w/3/s/flink-python/.tox/py38/bin/pytest 
> --durations=20 (exited with code 1)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13992&view=logs&j=9cada3cb-c1d3-5621-16da-0f718fb86602&t=8d78fe4f-d658-5c70-12f8-4921589024c3



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-22960) Temporal join TTL should use enableTimeToLive of state instead of timer

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-22960:

Fix Version/s: 1.16.0

> Temporal join TTL should use enableTimeToLive of state instead of timer
> ---
>
> Key: FLINK-22960
> URL: https://issues.apache.org/jira/browse/FLINK-22960
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Runtime
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: 1.15.0, 1.16.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-24542) Expose the freshness metrics for kafka connector

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-24542:

Fix Version/s: 1.16.0

> Expose the freshness metrics for kafka connector
> 
>
> Key: FLINK-24542
> URL: https://issues.apache.org/jira/browse/FLINK-24542
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.12.2, 1.14.0, 1.13.1
>Reporter: zlzhang0122
>Priority: Major
> Fix For: 1.15.0, 1.16.0
>
>
> When we start a Flink job to consume from Apache Kafka, we usually monitor 
> the offset lag, which can be calculated as current-offsets minus 
> committed-offsets, but sometimes the offset lag is hard to interpret: we can 
> hardly judge whether the value is normal or not. Kafka has proposed a new 
> metric, freshness (see 
> [a-guide-to-kafka-consumer-freshness|https://www.jesseyates.com/2019/11/04/kafka-consumer-freshness-a-guide.html?trk=article_share_wechat&from=timeline&isappinstalled=0]).
> We can also expose the freshness metric for the Kafka connector to improve 
> the user experience. From this metric, users can easily tell whether Kafka 
> consumption is backlogged and needs attention.
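The idea can be sketched as wall-clock fetch time minus the record's append timestamp, in contrast to the offset delta. This is a conceptual illustration, not the connector's metric code:

```python
def offset_lag(log_end_offset: int, committed_offset: int) -> int:
    # What is usually monitored today: how many records behind we are.
    # Hard to interpret: 10,000 records may be seconds or hours of data.
    return log_end_offset - committed_offset

def freshness_ms(fetch_time_ms: int, record_append_time_ms: int) -> int:
    # Freshness: how *old* the record being consumed is. Unlike offset lag,
    # this is directly interpretable as time, regardless of traffic volume.
    return fetch_time_ms - record_append_time_ms
```

A freshness of 500 ms means the same thing on a busy topic and a quiet one, which is exactly why it is easier for users to judge than a raw lag count.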



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-17337) Send UPDATE messages instead of INSERT and DELETE in streaming join operator

2022-04-12 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-17337:

Fix Version/s: 1.16.0

> Send UPDATE messages instead of INSERT and DELETE in streaming join operator
> 
>
> Key: FLINK-17337
> URL: https://issues.apache.org/jira/browse/FLINK-17337
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: Jark Wu
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.15.0, 1.16.0
>
>
> Currently, the streaming join operator always sends INSERT and DELETE 
> messages for simplification if it's not an inner join. However, we can send 
> UPDATE_BEFORE and UPDATE_AFTER messages instead of INSERT and DELETE. For 
> example, when we receive the right record "b", we can send {{UB[a, null]}} 
> and {{UA[a,b]}} instead of {{D[a,null]}} and {{I[a,b]}}. This is an 
> optimization, because UB can be omitted in some cases to reduce IO cost and 
> computation.
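The optimization can be modeled on changelog rows: emitting the pair as UPDATE_BEFORE/UPDATE_AFTER makes it recognizable as one update, so a downstream operator that needs no retractions can drop the UB. This is an illustrative RowKind model, not the planner's code:

```python
from enum import Enum

class RowKind(Enum):
    # Changelog row kinds, mirroring the abbreviations used in the ticket.
    INSERT = "I"
    DELETE = "D"
    UPDATE_BEFORE = "UB"
    UPDATE_AFTER = "UA"

def emit_as_delete_insert(old_row, new_row):
    # Current behavior for non-inner joins: retract, then re-insert.
    return [(RowKind.DELETE, old_row), (RowKind.INSERT, new_row)]

def emit_as_update(old_row, new_row):
    # Proposed behavior: the pair is recognizably a single update, so a
    # downstream operator can omit the UB to save IO and computation.
    return [(RowKind.UPDATE_BEFORE, old_row), (RowKind.UPDATE_AFTER, new_row)]

# Joining left record "a" with a newly arrived right record "b":
changes = emit_as_update(("a", None), ("a", "b"))
```

Both encodings carry the same data; only the UB/UA form tells the consumer the two rows belong to one update.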



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

