[jira] [Updated] (FLINK-22796) Update mem_setup_tm documentation

2021-05-31 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-22796:
-
Fix Version/s: 1.13.2
   1.14.0

> Update mem_setup_tm documentation
> -
>
> Key: FLINK-22796
> URL: https://issues.apache.org/jira/browse/FLINK-22796
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.13.1
>Reporter: Wei-Che Wei
>Assignee: Wei-Che Wei
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0, 1.13.2
>
>
> In [FLINK-20860], two config options were introduced.
> We should update the corresponding docs as well.
> https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/memory/mem_setup_tm/#managed-memory
> https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/deployment/memory/mem_setup_tm/#%e6%b6%88%e8%b4%b9%e8%80%85%e6%9d%83%e9%87%8d



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22796) Update mem_setup_tm documentation

2021-05-31 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-22796:
-
Priority: Major  (was: Minor)

> Update mem_setup_tm documentation
> -
>
> Key: FLINK-22796
> URL: https://issues.apache.org/jira/browse/FLINK-22796
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Wei-Che Wei
>Assignee: Wei-Che Wei
>Priority: Major
>  Labels: pull-request-available
>
> In [FLINK-20860], two config options were introduced.
> We should update the corresponding docs as well.
> https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/memory/mem_setup_tm/#managed-memory
> https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/deployment/memory/mem_setup_tm/#%e6%b6%88%e8%b4%b9%e8%80%85%e6%9d%83%e9%87%8d





[jira] [Updated] (FLINK-22796) Update mem_setup_tm documentation

2021-05-31 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-22796:
-
Affects Version/s: 1.13.1

> Update mem_setup_tm documentation
> -
>
> Key: FLINK-22796
> URL: https://issues.apache.org/jira/browse/FLINK-22796
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.13.1
>Reporter: Wei-Che Wei
>Assignee: Wei-Che Wei
>Priority: Major
>  Labels: pull-request-available
>
> In [FLINK-20860], two config options were introduced.
> We should update the corresponding docs as well.
> https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/memory/mem_setup_tm/#managed-memory
> https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/deployment/memory/mem_setup_tm/#%e6%b6%88%e8%b4%b9%e8%80%85%e6%9d%83%e9%87%8d





[jira] [Updated] (FLINK-22796) Update mem_setup_tm documentation

2021-05-31 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-22796:
-
Parent: (was: FLINK-19604)
Issue Type: Bug  (was: Sub-task)

> Update mem_setup_tm documentation
> -
>
> Key: FLINK-22796
> URL: https://issues.apache.org/jira/browse/FLINK-22796
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Wei-Che Wei
>Assignee: Wei-Che Wei
>Priority: Minor
>  Labels: pull-request-available
>
> In [FLINK-20860], two config options were introduced.
> We should update the corresponding docs as well.
> https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/memory/mem_setup_tm/#managed-memory
> https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/deployment/memory/mem_setup_tm/#%e6%b6%88%e8%b4%b9%e8%80%85%e6%9d%83%e9%87%8d





[GitHub] [flink] xintongsong closed pull request #16016: [FLINK-22796] Update mem_setup_tm documentation

2021-05-31 Thread GitBox


xintongsong closed pull request #16016:
URL: https://github.com/apache/flink/pull/16016


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15885: [FLINK-22376][runtime] RecoveredChannelStateHandler recycles the buffer if it was created inside and doesn't recycle if it was passed

2021-05-31 Thread GitBox


flinkbot edited a comment on pull request #15885:
URL: https://github.com/apache/flink/pull/15885#issuecomment-837001445


   
   ## CI report:
   
   * 4c1a9dbb3034eb0ac880db84954b2c40fed365c2 UNKNOWN
   * fc4cedd8a305e6cb1947d3ecdbced6c879b536e3 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18463)
 
   * b716a5d8e2c3ac4b005b46dc6f1ca57a79e5f90e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Commented] (FLINK-22737) Add support for CURRENT_WATERMARK to SQL

2021-05-31 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-22737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17354813#comment-17354813
 ] 

Ingo Bürk commented on FLINK-22737:
---

Just for completeness, this is how the SQL client will render MIN_VALUE. I 
can't say I'm a fan of it, but if we allow a date 300 million years in the 
past, this will have to be fixed. :) 

!screenshot-2021-05-31_14-22-43.png!

> Add support for CURRENT_WATERMARK to SQL
> 
>
> Key: FLINK-22737
> URL: https://issues.apache.org/jira/browse/FLINK-22737
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: David Anderson
>Assignee: Ingo Bürk
>Priority: Major
> Attachments: screenshot-2021-05-31_14-22-43.png
>
>
> With a built-in function returning the current watermark, one could operate 
> on late events without resorting to the DataStream API.
> Called with zero parameters, this function returns the current watermark for 
> the current row – if there is an event-time attribute. Otherwise, it returns 
> NULL. 
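The zero-argument semantics described above can be sketched outside of Flink. This is only an illustration: `LatenessCheck.isLate` is a hypothetical helper, and the `<=` comparison is one common lateness convention, not necessarily the one the built-in function will settle on.

```java
public class LatenessCheck {
    // Hypothetical sketch of the proposed semantics, independent of Flink:
    // a record is "late" when its event-time timestamp does not exceed the
    // current watermark; with no event-time attribute the watermark is NULL,
    // so nothing can be classified as late.
    static boolean isLate(long eventTimestamp, Long currentWatermark) {
        if (currentWatermark == null) {
            return false; // no event-time attribute -> watermark is NULL
        }
        return eventTimestamp <= currentWatermark;
    }
}
```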





[jira] [Updated] (FLINK-22737) Add support for CURRENT_WATERMARK to SQL

2021-05-31 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-22737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ingo Bürk updated FLINK-22737:
--
Attachment: screenshot-2021-05-31_14-22-43.png

> Add support for CURRENT_WATERMARK to SQL
> 
>
> Key: FLINK-22737
> URL: https://issues.apache.org/jira/browse/FLINK-22737
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: David Anderson
>Assignee: Ingo Bürk
>Priority: Major
> Attachments: screenshot-2021-05-31_14-22-43.png
>
>
> With a built-in function returning the current watermark, one could operate 
> on late events without resorting to the DataStream API.
> Called with zero parameters, this function returns the current watermark for 
> the current row – if there is an event-time attribute. Otherwise, it returns 
> NULL. 





[GitHub] [flink] flinkbot edited a comment on pull request #16029: [FLINK-22786][sql-client] sql-client can not create .flink-sql-history file

2021-05-31 Thread GitBox


flinkbot edited a comment on pull request #16029:
URL: https://github.com/apache/flink/pull/16029#issuecomment-851388620


   
   ## CI report:
   
   * 2d98e97a844ff39a858c9500ff0a8cf49b8ca055 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18474)
 
   * de104833b8489a48639e8f4d5000b344f756c2ed Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18479)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #14820: [FLINK-21085][runtime][checkpoint] Allows triggering checkpoint barrier handler for non-source tasks

2021-05-31 Thread GitBox


flinkbot edited a comment on pull request #14820:
URL: https://github.com/apache/flink/pull/14820#issuecomment-770327711


   
   ## CI report:
   
   * d83bee7799051ef32e0f9ec9e26339b23b7dc057 UNKNOWN
   * 045971a4900dc6798be00108c8e639e1c77fa18c Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13940)
 
   * 05c48a82487c41963697e99a31baeab9e3c82240 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18478)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] JasonLeeCoding commented on a change in pull request #16028: [FLINK-22795] Throw better exception when executing remote SQL file in SQL Client

2021-05-31 Thread GitBox


JasonLeeCoding commented on a change in pull request #16028:
URL: https://github.com/apache/flink/pull/16028#discussion_r642783697



##
File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/SqlClient.java
##
@@ -213,6 +214,16 @@ protected static void startClient(String[] args, 
Supplier terminalFact
 }
 }
 
+    public static void checkArgs(String[] args) {
+        for (String arg : args) {
+            if (arg.trim().toLowerCase().startsWith("hdfs")) {

Review comment:
   OK, I will modify it that way.
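For illustration, a scheme-based check along the lines the review suggests could look like the following. `PathSchemeCheck.isRemotePath` is a hypothetical helper, not the actual patch, and it assumes only scheme-less or `file`-scheme paths count as local.

```java
import java.net.URI;

public class PathSchemeCheck {
    // Hypothetical alternative to the string-prefix test above: parse the
    // argument as a URI and inspect its scheme, so a local file that merely
    // starts with "hdfs" (e.g. "hdfs-queries.sql") is not rejected.
    static boolean isRemotePath(String arg) {
        try {
            String scheme = URI.create(arg.trim()).getScheme();
            return scheme != null && !scheme.equalsIgnoreCase("file");
        } catch (IllegalArgumentException e) {
            return false; // not parseable as a URI; treat as a local path
        }
    }
}
```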








[GitHub] [flink] flinkbot edited a comment on pull request #16029: [FLINK-22786][sql-client] sql-client can not create .flink-sql-history file

2021-05-31 Thread GitBox


flinkbot edited a comment on pull request #16029:
URL: https://github.com/apache/flink/pull/16029#issuecomment-851388620


   
   ## CI report:
   
   * 2d98e97a844ff39a858c9500ff0a8cf49b8ca055 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18474)
 
   * de104833b8489a48639e8f4d5000b344f756c2ed UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16016: [FLINK-22796] Update mem_setup_tm documentation

2021-05-31 Thread GitBox


flinkbot edited a comment on pull request #16016:
URL: https://github.com/apache/flink/pull/16016#issuecomment-850273330


   
   ## CI report:
   
   * 8e520a025e21f3f14d789f9326fa5c7782336a43 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18389)
 
   * e1179f6ba7e27e5241b8c304c3cea39293d8c4c7 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18477)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] JasonLeeCoding commented on a change in pull request #16028: [FLINK-22795] Throw better exception when executing remote SQL file in SQL Client

2021-05-31 Thread GitBox


JasonLeeCoding commented on a change in pull request #16028:
URL: https://github.com/apache/flink/pull/16028#discussion_r642775785



##
File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/SqlExecutionException.java
##
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.client;
+
+/** Exception thrown during the execution of SQL statements. */
+public class SqlExecutionException extends RuntimeException {

Review comment:
   SqlExecutionException is currently under the gateway package. If it is not 
placed outside, the error message may mislead the user. For example: 
org.apache.flink.table.client.gateway.SqlExecutionException: Currently, Flink 
doesn't support HDFS paths. You can use a local path. 








[GitHub] [flink] JasonLeeCoding commented on a change in pull request #16028: [FLINK-22795] Throw better exception when executing remote SQL file in SQL Client

2021-05-31 Thread GitBox


JasonLeeCoding commented on a change in pull request #16028:
URL: https://github.com/apache/flink/pull/16028#discussion_r642774290



##
File path: 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/SqlClientTest.java
##
@@ -205,6 +205,17 @@ public void testExecuteSqlFile() throws Exception {
 }
 }
 
+@Test
+public void testExecuteSqlWithHDFSFile() throws Exception {
+List<String> statements = Collections.singletonList("HELP;\n");
+String sqlFilePath = createSqlFile(statements, "test-sql.sql");
+String[] args = new String[] {"-f", "hdfs:/" + sqlFilePath};

Review comment:
   Yes, you are right, it just needs a path.








[GitHub] [flink] flinkbot edited a comment on pull request #16029: [FLINK-22786][sql-client] sql-client can not create .flink-sql-history file

2021-05-31 Thread GitBox


flinkbot edited a comment on pull request #16029:
URL: https://github.com/apache/flink/pull/16029#issuecomment-851388620


   
   ## CI report:
   
   * 2d98e97a844ff39a858c9500ff0a8cf49b8ca055 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18474)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16016: [FLINK-22796] Update mem_setup_tm documentation

2021-05-31 Thread GitBox


flinkbot edited a comment on pull request #16016:
URL: https://github.com/apache/flink/pull/16016#issuecomment-850273330


   
   ## CI report:
   
   * 8e520a025e21f3f14d789f9326fa5c7782336a43 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18389)
 
   * e1179f6ba7e27e5241b8c304c3cea39293d8c4c7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #14820: [FLINK-21085][runtime][checkpoint] Allows triggering checkpoint barrier handler for non-source tasks

2021-05-31 Thread GitBox


flinkbot edited a comment on pull request #14820:
URL: https://github.com/apache/flink/pull/14820#issuecomment-770327711


   
   ## CI report:
   
   * d83bee7799051ef32e0f9ec9e26339b23b7dc057 UNKNOWN
   * 045971a4900dc6798be00108c8e639e1c77fa18c Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13940)
 
   * 05c48a82487c41963697e99a31baeab9e3c82240 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Assigned] (FLINK-21905) Hive streaming source should use FIFO FileSplitAssigner instead of LIFO

2021-05-31 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-21905:
---

Assignee: luoyuxia

> Hive streaming source should use FIFO FileSplitAssigner instead of LIFO
> ---
>
> Key: FLINK-21905
> URL: https://issues.apache.org/jira/browse/FLINK-21905
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Jark Wu
>Assignee: luoyuxia
>Priority: Major
>
> Currently, the Hive streaming source uses {{SimpleAssigner}}, which hands out 
> splits in LIFO order. This results in out-of-order partition reading even if 
> we add partition splits in order. Therefore, we should use a FIFO-order 
> {{FileSplitAssigner}} instead. 
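The reordering described above can be illustrated with a plain deque. This is not Flink's actual assigner code, and the split names are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SplitOrder {
    // Hand out splits that were added in order; FIFO serves them from the
    // head (preserving partition order), LIFO from the tail (reversing it).
    static List<String> handOut(boolean fifo, String... splits) {
        ArrayDeque<String> deque = new ArrayDeque<>(Arrays.asList(splits));
        List<String> served = new ArrayList<>();
        while (!deque.isEmpty()) {
            served.add(fifo ? deque.pollFirst() : deque.pollLast());
        }
        return served;
    }
}
```

Adding partitions part-0, part-1, part-2 in order, a FIFO hand-out serves them in the same order, while a LIFO hand-out serves part-2 first, which is the out-of-order partition reading the issue describes.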





[jira] [Comment Edited] (FLINK-22689) Table API Documentation Row-Based Operations Example Fails

2021-05-31 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17354785#comment-17354785
 ] 

Jark Wu edited comment on FLINK-22689 at 6/1/21, 4:11 AM:
--

Fixed in 
- master: 997fa27e462c612f70401939c0fa9b217490dd73
- release-1.13: dab3cf240ada16aa754f7855dff4a364abf91e5f


was (Author: jark):
Fixed in master: 997fa27e462c612f70401939c0fa9b217490dd73

> Table API Documentation Row-Based Operations Example Fails
> --
>
> Key: FLINK-22689
> URL: https://issues.apache.org/jira/browse/FLINK-22689
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.1
>Reporter: Yunfeng Zhou
>Assignee: Yao Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0, 1.13.2
>
>
> I wrote the following program according to the example code provided in 
> [Documentation/Table API/Row-based 
> operations|https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/tableapi/#row-based-operations]
> {code:java}
> public class TableUDF {
>     public static void main(String[] args) {
>         StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.createLocalEnvironment();
>         StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
> ​
>         Table input = tEnv.fromValues(
>                 DataTypes.of("ROW"),
>                 Row.of("name")
>         );
> ​
>         ScalarFunction func = new MyMapFunction();
>         tEnv.registerFunction("func", func);
> ​
>         Table table = input
>                 .map(call("func", $("c")).as("a", "b")); // exception occurs 
> here
> ​
>         table.execute().print();
>     }
> ​
>     public static class MyMapFunction extends ScalarFunction {
>         public Row eval(String a) {
>             return Row.of(a, "pre-" + a);
>         }
> ​
>         @Override
>         public TypeInformation<?> getResultType(Class<?>[] signature) {
>             return Types.ROW(Types.STRING, Types.STRING);
>         }
>     }
> }
> {code}
> The code above would throw an exception like this:
> {code}
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> Only a scalar function can be used in the map operator.
>   at 
> org.apache.flink.table.operations.utils.OperationTreeBuilder.map(OperationTreeBuilder.java:480)
>   at org.apache.flink.table.api.internal.TableImpl.map(TableImpl.java:519)
>   at org.apache.flink.ml.common.function.TableUDFBug.main(TableUDF.java:29)
> {code}
>   The core of the program above is identical to that provided in the Flink 
> documentation, but it does not work correctly. This might affect users who 
> want to use custom functions with the Table API.
>  





[jira] [Updated] (FLINK-22689) Table API Documentation Row-Based Operations Example Fails

2021-05-31 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-22689:

Fix Version/s: 1.13.2

> Table API Documentation Row-Based Operations Example Fails
> --
>
> Key: FLINK-22689
> URL: https://issues.apache.org/jira/browse/FLINK-22689
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.1
>Reporter: Yunfeng Zhou
>Assignee: Yao Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0, 1.13.2
>
>
> I wrote the following program according to the example code provided in 
> [Documentation/Table API/Row-based 
> operations|https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/tableapi/#row-based-operations]
> {code:java}
> public class TableUDF {
>     public static void main(String[] args) {
>         StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.createLocalEnvironment();
>         StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
> ​
>         Table input = tEnv.fromValues(
>                 DataTypes.of("ROW"),
>                 Row.of("name")
>         );
> ​
>         ScalarFunction func = new MyMapFunction();
>         tEnv.registerFunction("func", func);
> ​
>         Table table = input
>                 .map(call("func", $("c")).as("a", "b")); // exception occurs 
> here
> ​
>         table.execute().print();
>     }
> ​
>     public static class MyMapFunction extends ScalarFunction {
>         public Row eval(String a) {
>             return Row.of(a, "pre-" + a);
>         }
> ​
>         @Override
>         public TypeInformation<?> getResultType(Class<?>[] signature) {
>             return Types.ROW(Types.STRING, Types.STRING);
>         }
>     }
> }
> {code}
> The code above would throw an exception like this:
> {code}
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> Only a scalar function can be used in the map operator.
>   at 
> org.apache.flink.table.operations.utils.OperationTreeBuilder.map(OperationTreeBuilder.java:480)
>   at org.apache.flink.table.api.internal.TableImpl.map(TableImpl.java:519)
>   at org.apache.flink.ml.common.function.TableUDFBug.main(TableUDF.java:29)
> {code}
>   The core of the program above is identical to that provided in the Flink 
> documentation, but it does not work correctly. This might affect users who 
> want to use custom functions with the Table API.
>  





[jira] [Closed] (FLINK-22689) Table API Documentation Row-Based Operations Example Fails

2021-05-31 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-22689.
---
Fix Version/s: 1.14.0
   Resolution: Fixed

Fixed in master: 997fa27e462c612f70401939c0fa9b217490dd73

> Table API Documentation Row-Based Operations Example Fails
> --
>
> Key: FLINK-22689
> URL: https://issues.apache.org/jira/browse/FLINK-22689
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.1
>Reporter: Yunfeng Zhou
>Assignee: Yao Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> I wrote the following program according to the example code provided in 
> [Documentation/Table API/Row-based 
> operations|https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/tableapi/#row-based-operations]
> {code:java}
> public class TableUDF {
>     public static void main(String[] args) {
>         StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.createLocalEnvironment();
>         StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
> ​
>         Table input = tEnv.fromValues(
>                 DataTypes.of("ROW"),
>                 Row.of("name")
>         );
> ​
>         ScalarFunction func = new MyMapFunction();
>         tEnv.registerFunction("func", func);
> ​
>         Table table = input
>                 .map(call("func", $("c")).as("a", "b")); // exception occurs 
> here
> ​
>         table.execute().print();
>     }
> ​
>     public static class MyMapFunction extends ScalarFunction {
>         public Row eval(String a) {
>             return Row.of(a, "pre-" + a);
>         }
> ​
>         @Override
>         public TypeInformation<?> getResultType(Class<?>[] signature) {
>             return Types.ROW(Types.STRING, Types.STRING);
>         }
>     }
> }
> {code}
> The code above would throw an exception like this:
> {code}
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> Only a scalar function can be used in the map operator.
>   at 
> org.apache.flink.table.operations.utils.OperationTreeBuilder.map(OperationTreeBuilder.java:480)
>   at org.apache.flink.table.api.internal.TableImpl.map(TableImpl.java:519)
>   at org.apache.flink.ml.common.function.TableUDFBug.main(TableUDF.java:29)
> {code}
>   The core of the program above is identical to that provided in the Flink 
> documentation, but it does not work correctly. This might affect users who 
> want to use custom functions with the Table API.
>  





[GitHub] [flink] wuchong merged pull request #16036: [FLINK-22689][doc] Table API Documentation Row-Based Operations Examp…

2021-05-31 Thread GitBox


wuchong merged pull request #16036:
URL: https://github.com/apache/flink/pull/16036


   






[jira] [Closed] (FLINK-21741) Support SHOW JARS statement in SQL Client

2021-05-31 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-21741.
---
Fix Version/s: 1.14.0
 Assignee: Nicholas Jiang
   Resolution: Fixed

Implemented in master: 6668d3be2edaea9a775f00336abd5e2fa8929a34

> Support SHOW JARS statement in SQL Client
> -
>
> Key: FLINK-21741
> URL: https://issues.apache.org/jira/browse/FLINK-21741
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Client
>Reporter: Jark Wu
>Assignee: Nicholas Jiang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>






[GitHub] [flink] wuchong merged pull request #16010: [FLINK-21741][sql-client] Support SHOW JARS statement in SQL Client

2021-05-31 Thread GitBox


wuchong merged pull request #16010:
URL: https://github.com/apache/flink/pull/16010


   






[GitHub] [flink] wuchong commented on pull request #16010: [FLINK-21741][sql-client] Support SHOW JARS statement in SQL Client

2021-05-31 Thread GitBox


wuchong commented on pull request #16010:
URL: https://github.com/apache/flink/pull/16010#issuecomment-851789262


   The failed case is not related to this PR. Will merge this. 






[GitHub] [flink] pensz removed a comment on pull request #16029: [FLINK-22786][sql-client] sql-client can not create .flink-sql-history file

2021-05-31 Thread GitBox


pensz removed a comment on pull request #16029:
URL: https://github.com/apache/flink/pull/16029#issuecomment-851783298


   `mvn spotless:apply` succeeds locally. I will check it in another 
environment.
   
   ```
   [INFO] Reactor Summary for Flink : 1.14-SNAPSHOT:
   [INFO]
   [INFO] Flink : Tools : Force Shading .. SUCCESS [  0.081 s]
   [INFO] Flink :  SUCCESS [  0.284 s]
   [INFO] Flink : Annotations  SUCCESS [  0.470 s]
   [INFO] Flink : Test utils : ... SUCCESS [  0.006 s]
   [INFO] Flink : Test utils : Junit . SUCCESS [  0.309 s]
   [INFO] Flink : Metrics : .. SUCCESS [  0.015 s]
   [INFO] Flink : Metrics : Core . SUCCESS [  0.166 s]
   [INFO] Flink : Core ... SUCCESS [  4.256 s]
   [INFO] Flink : Java ... SUCCESS [  0.854 s]
   [INFO] Flink : Queryable state : .. SUCCESS [  0.003 s]
   [INFO] Flink : Queryable state : Client Java .. SUCCESS [  0.117 s]
   [INFO] Flink : FileSystems : .. SUCCESS [  0.006 s]
   [INFO] Flink : FileSystems : Hadoop FS  SUCCESS [  0.060 s]
   [INFO] Flink : Runtime  SUCCESS [  7.899 s]
   [INFO] Flink : Scala .. SUCCESS [  0.044 s]
   [INFO] Flink : FileSystems : Mapr FS .. SUCCESS [  0.013 s]
   [INFO] Flink : FileSystems : Hadoop FS shaded . SUCCESS [  0.056 s]
   [INFO] Flink : FileSystems : S3 FS Base ... SUCCESS [  0.107 s]
   [INFO] Flink : FileSystems : S3 FS Hadoop . SUCCESS [  0.027 s]
   [INFO] Flink : FileSystems : S3 FS Presto . SUCCESS [  0.025 s]
   [INFO] Flink : FileSystems : OSS FS ... SUCCESS [  0.010 s]
   [INFO] Flink : FileSystems : Azure FS Hadoop .. SUCCESS [  0.015 s]
   [INFO] Flink : Optimizer .. SUCCESS [  0.653 s]
   [INFO] Flink : Connectors : ... SUCCESS [  0.026 s]
   [INFO] Flink : Connectors : File Sink Common .. SUCCESS [  0.038 s]
   [INFO] Flink : Streaming Java . SUCCESS [  2.555 s]
   [INFO] Flink : Clients  SUCCESS [  0.282 s]
   [INFO] Flink : Test utils : Utils . SUCCESS [  0.074 s]
   [INFO] Flink : Runtime web  SUCCESS [  0.137 s]
   [INFO] Flink : Examples : . SUCCESS [  0.006 s]
   [INFO] Flink : Examples : Batch ... SUCCESS [  0.079 s]
   [INFO] Flink : Connectors : Hadoop compatibility .. SUCCESS [  0.114 s]
   [INFO] Flink : State backends : ... SUCCESS [  0.003 s]
   [INFO] Flink : State backends : RocksDB ... SUCCESS [  0.276 s]
   [INFO] Flink : State backends : Changelog . SUCCESS [  0.029 s]
   [INFO] Flink : Tests .. SUCCESS [  0.970 s]
   [INFO] Flink : Streaming Scala  SUCCESS [  0.009 s]
   [INFO] Flink : Connectors : HCatalog .. SUCCESS [  0.013 s]
   [INFO] Flink : Test utils : Connectors  SUCCESS [  0.013 s]
   [INFO] Flink : Connectors : Base .. SUCCESS [  0.073 s]
   [INFO] Flink : Connectors : Files . SUCCESS [  0.198 s]
   [INFO] Flink : Table :  SUCCESS [  0.050 s]
   [INFO] Flink : Table : Common . SUCCESS [  1.132 s]
   [INFO] Flink : Table : API Java ... SUCCESS [  0.513 s]
   [INFO] Flink : Table : API Java bridge  SUCCESS [  0.135 s]
   [INFO] Flink : Table : API Scala .. SUCCESS [  0.002 s]
   [INFO] Flink : Table : API Scala bridge ... SUCCESS [  0.007 s]
   [INFO] Flink : Table : SQL Parser . SUCCESS [  0.127 s]
   [INFO] Flink : Libraries :  SUCCESS [  0.011 s]
   [INFO] Flink : Libraries : CEP  SUCCESS [  0.448 s]
   [INFO] Flink : Table : Planner  SUCCESS [  0.362 s]
   [INFO] Flink : Formats : .. SUCCESS [  0.009 s]
   [INFO] Flink : Format : Common  SUCCESS [  0.006 s]
   [INFO] Flink : Table : SQL Parser Hive  SUCCESS [  0.047 s]
   [INFO] Flink : Table : Runtime Blink .. SUCCESS [  1.490 s]
   [INFO] Flink : Table : Planner Blink .. SUCCESS [  1.485 s]
   ```

[jira] [Commented] (FLINK-21905) Hive streaming source should use FIFO FileSplitAssigner instead of LIFO

2021-05-31 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17354781#comment-17354781
 ] 

luoyuxia commented on FLINK-21905:
--

[~jark] Could you assign this issue to me?

> Hive streaming source should use FIFO FileSplitAssigner instead of LIFO
> ---
>
> Key: FLINK-21905
> URL: https://issues.apache.org/jira/browse/FLINK-21905
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Jark Wu
>Priority: Major
>
> Currently, Hive streaming source uses {{SimpleAssigner}} which hands out 
> splits in LIFO order. However, it will result in out-of-order partition 
> reading even if we add partition splits in order. Therefore, we should use a 
> FIFO order {{FileSplitAssigner}} instead. 
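
As an illustration of the ordering difference described above (not the actual Flink `SimpleSplitAssigner` code), a stack-backed assigner hands splits back in reverse order while a queue preserves insertion order:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Deque;
import java.util.List;

public class SplitOrderDemo {
    // Stack-backed assignment (LIFO): the last split added is handed out first.
    static List<String> lifoOrder(List<String> splits) {
        Deque<String> stack = new ArrayDeque<>();
        splits.forEach(stack::push);
        List<String> out = new ArrayList<>();
        while (!stack.isEmpty()) out.add(stack.pop());
        return out;
    }

    // Queue-backed assignment (FIFO): splits come back in insertion order,
    // so partitions added in order are also read in order.
    static List<String> fifoOrder(List<String> splits) {
        Deque<String> queue = new ArrayDeque<>();
        splits.forEach(queue::addLast);
        List<String> out = new ArrayList<>();
        while (!queue.isEmpty()) out.add(queue.pollFirst());
        return out;
    }

    public static void main(String[] args) {
        List<String> splits = Arrays.asList("part-0", "part-1", "part-2");
        System.out.println("LIFO: " + lifoOrder(splits)); // [part-2, part-1, part-0]
        System.out.println("FIFO: " + fifoOrder(splits)); // [part-0, part-1, part-2]
    }
}
```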



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22806) The folder /checkpoint/shared state of FLINKSQL is getting bigger and bigger

2021-05-31 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17354780#comment-17354780
 ] 

Jark Wu commented on FLINK-22806:
-

Hi [~yunta], could you post the link here so others can learn from this? 

> The folder /checkpoint/shared state of  FLINKSQL is getting bigger and bigger
> -
>
> Key: FLINK-22806
> URL: https://issues.apache.org/jira/browse/FLINK-22806
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Affects Versions: 1.12.0
>Reporter: YUJIANBO
>Priority: Major
>
> I have added the parameter :
>1、tableEnv.getConfig().setIdleStateRetention:   3600 (one hour),  
>2、state.checkpoints.num-retained:3
>3、sql:
> {code:sql}
> // demo:  
> select count(1),  LISTAGG(concat(m,n)) from tabeA group by a, b, 
> time_minute
> //details:
> CREATE TABLE user_behavior (
>`request_ip` STRING,
>`request_time` BIGINT,
>`header` STRING ,
>// timestamp  converted to specific per minute   
>`t_min` as cast(`request_time`-(`request_time` + 2880)%6 as 
> BIGINT),
>`ts` as TO_TIMESTAMP(FROM_UNIXTIME(`request_time`/1000-28800,'-MM-dd 
> HH:mm:ss')),
>WATERMARK FOR `ts` AS `ts` - INTERVAL '60' MINUTE) 
> with (
>'connector' = 'kafka',
> 
> );
> CREATE TABLE blackhole_table (
>`cnt` BIGINT,
>`lists` STRING
> ) WITH (
>  'connector' = 'blackhole'
> );
> insert into blackhole_table 
> select 
> count(*) as cnt, 
> LISTAGG(concat(`request_ip`, `header`, cast(`request_time` as STRING))) 
> as lists
> from user_behavior 
> group by `request_ip`,`header`,`t_min`;
> {code}
>4、state.backend: rocksdb  
>5、state.backend.incremental is true
> I set the state retention to one hour, but the size of the folder
> /checkpoint/shared is still growing. I observed it for two days and guess
> that there is expired data in the /checkpoint/shared folder that has not
> been cleared. What else can limit the growth of state?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zuston commented on pull request #15653: [FLINK-22329][hive] Inject current ugi credentials into jobconf when getting file split in hive connector

2021-05-31 Thread GitBox


zuston commented on pull request #15653:
URL: https://github.com/apache/flink/pull/15653#issuecomment-851787011


   If you have time, could you help review again? Thanks @lirui-apache 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21086) Make checkpointBarrierHandler Support EndOfPartition

2021-05-31 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-21086:

Summary: Make checkpointBarrierHandler Support EndOfPartition  (was: 
CheckpointBarrierHandler Insert barriers for channels received EndOfPartition)

> Make checkpointBarrierHandler Support EndOfPartition
> 
>
> Key: FLINK-21086
> URL: https://issues.apache.org/jira/browse/FLINK-21086
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream, Runtime / Checkpointing
>Reporter: Yun Gao
>Priority: Major
>  Labels: pull-request-available
>
> For a non-source task, if one of its precedent tasks has finished, that
> precedent task sends EndOfPartition to it. For checkpoints after that, this
> task will not receive barriers from the channels that have sent
> EndOfPartition. To finish the alignment, the CheckpointBarrierHandler will
> insert barriers before EndOfPartition for these channels.
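
A minimal sketch of the alignment bookkeeping described above (illustrative only; the real CheckpointBarrierHandler is far more involved): a channel that delivered EndOfPartition is counted as if a barrier had been inserted for it, so alignment can still complete using only the live channels.

```java
import java.util.HashSet;
import java.util.Set;

public class AlignmentDemo {
    private final Set<Integer> barrierSeen = new HashSet<>();
    private final int numChannels;

    AlignmentDemo(int numChannels) { this.numChannels = numChannels; }

    // A finished channel will never deliver a real barrier, so treat
    // EndOfPartition as an inserted barrier for that channel.
    void onEndOfPartition(int channel) { barrierSeen.add(channel); }

    void onBarrier(int channel) { barrierSeen.add(channel); }

    // Alignment completes once every channel has contributed a barrier,
    // whether real or inserted.
    boolean aligned() { return barrierSeen.size() == numChannels; }

    public static void main(String[] args) {
        AlignmentDemo handler = new AlignmentDemo(2);
        handler.onEndOfPartition(0);           // channel 0 finished early
        System.out.println(handler.aligned()); // false: still waiting on channel 1
        handler.onBarrier(1);                  // barrier arrives on the live channel
        System.out.println(handler.aligned()); // true: alignment completes
    }
}
```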



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] tony810430 commented on pull request #16016: [FLINK-22796] Update mem_setup_tm documentation

2021-05-31 Thread GitBox


tony810430 commented on pull request #16016:
URL: https://github.com/apache/flink/pull/16016#issuecomment-851785081


   Hi @xintongsong 
   I have addressed your comments. What do you think of these updates?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] pensz commented on pull request #16029: [FLINK-22786][sql-client] sql-client can not create .flink-sql-history file

2021-05-31 Thread GitBox


pensz commented on pull request #16029:
URL: https://github.com/apache/flink/pull/16029#issuecomment-851783298


   `mvn spotless:apply` succeeds locally. I will check again in another 
environment.
   

[GitHub] [flink] flinkbot edited a comment on pull request #16029: [FLINK-22786][sql-client] sql-client can not create .flink-sql-history file

2021-05-31 Thread GitBox


flinkbot edited a comment on pull request #16029:
URL: https://github.com/apache/flink/pull/16029#issuecomment-851388620


   
   ## CI report:
   
   * 08a786455ddca7efb86a77dae27511f050d728f6 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18471)
 
   * 2d98e97a844ff39a858c9500ff0a8cf49b8ca055 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18474)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-10205) Batch Job: InputSplit Fault tolerant for DataSourceTask

2021-05-31 Thread wangwj (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-10205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17354777#comment-17354777
 ] 

wangwj commented on FLINK-10205:


[~Ryantaocer][~isunjin]
Hi, excuse me.
After reading this issue in detail, I have a question: in a batch job, although
each input split will be processed exactly once, after this patch is merged a
task that fails over may not process the same input split it was processing
before the failover.
Does this problem still exist?
I am working on speculative execution, so I want to discuss it with you.
Thanks~


> Batch Job: InputSplit Fault tolerant for DataSourceTask
> ---
>
> Key: FLINK-10205
> URL: https://issues.apache.org/jira/browse/FLINK-10205
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.6.1, 1.6.2, 1.7.0
>Reporter: JIN SUN
>Assignee: ryantaocer
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>   Original Estimate: 168h
>  Time Spent: 0.5h
>  Remaining Estimate: 167.5h
>
> Today the DataSourceTask pulls InputSplits from the JobManager to achieve
> better performance. However, when a DataSourceTask fails and is rerun, it
> will not get the same splits as its previous attempt, which can introduce
> inconsistent results or even data corruption.
> Furthermore, if two executions run at the same time (in the batch
> scenario), the two executions should process the same splits.
> We need to fix this issue to make the inputs of a DataSourceTask
> deterministic. The proposal is to save all splits into the ExecutionVertex,
> and the DataSourceTask will pull splits from there.
>  document:
> [https://docs.google.com/document/d/1FdZdcA63tPUEewcCimTFy9Iz2jlVlMRANZkO4RngIuk/edit?usp=sharing]
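
The proposal above can be sketched as a small registry (illustrative only; names and structure are made up, not the actual Flink implementation): the first attempt of a vertex pulls fresh splits and records them, and any later attempt replays exactly the recorded list.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DeterministicSplits {
    private final List<String> allSplits;
    private final Map<Integer, List<String>> assignedByVertex = new HashMap<>();
    private int cursor = 0;

    DeterministicSplits(List<String> allSplits) { this.allSplits = allSplits; }

    // First call for a vertex pulls fresh splits and records the assignment;
    // a restarted attempt of the same vertex gets the identical list back.
    synchronized List<String> splitsFor(int vertexId, int count) {
        return assignedByVertex.computeIfAbsent(vertexId, id -> {
            List<String> picked = new ArrayList<>();
            while (picked.size() < count && cursor < allSplits.size()) {
                picked.add(allSplits.get(cursor++));
            }
            return picked;
        });
    }

    public static void main(String[] args) {
        DeterministicSplits reg =
                new DeterministicSplits(Arrays.asList("s0", "s1", "s2", "s3"));
        System.out.println(reg.splitsFor(0, 2)); // [s0, s1]
        System.out.println(reg.splitsFor(1, 2)); // [s2, s3]
        // A restarted attempt of vertex 0 replays the same splits:
        System.out.println(reg.splitsFor(0, 2)); // [s0, s1]
    }
}
```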



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] xintongsong commented on a change in pull request #16016: [FLINK-22796] Update mem_setup_tm documentation

2021-05-31 Thread GitBox


xintongsong commented on a change in pull request #16016:
URL: https://github.com/apache/flink/pull/16016#discussion_r642757794



##
File path: docs/content/docs/deployment/memory/mem_setup_tm.md
##
@@ -98,13 +98,14 @@ See also [how to configure memory for state backends]({{< 
ref "docs/deployment/m
 If your job contains multiple types of managed memory consumers, you can also 
control how managed memory should be shared across these types.
 The configuration option [`taskmanager.memory.managed.consumer-weights`]({{< 
ref "docs/deployment/config" >}}#taskmanager-memory-managed-consumer-weights) 
allows you to set a weight for each type, to which Flink will reserve managed 
memory proportionally.
 Valid consumer types are:
-* `DATAPROC`: for RocksDB state backend in streaming and built-in algorithms 
in batch.
+* `OPERATOR`: for built-in algorithms in streaming or batch.

Review comment:
   Similar to `PYTHON`, we can omit "in streaming or batch"
   ```suggestion
   * `OPERATOR`: for built-in algorithms.
   ```

##
File path: docs/content/docs/deployment/memory/mem_setup_tm.md
##
@@ -98,13 +98,14 @@ See also [how to configure memory for state backends]({{< 
ref "docs/deployment/m
 If your job contains multiple types of managed memory consumers, you can also 
control how managed memory should be shared across these types.
 The configuration option [`taskmanager.memory.managed.consumer-weights`]({{< 
ref "docs/deployment/config" >}}#taskmanager-memory-managed-consumer-weights) 
allows you to set a weight for each type, to which Flink will reserve managed 
memory proportionally.
 Valid consumer types are:
-* `DATAPROC`: for RocksDB state backend in streaming and built-in algorithms 
in batch.
+* `OPERATOR`: for built-in algorithms in streaming or batch.

Review comment:
   > Batch jobs can use it for sorting, hash tables, caching of 
intermediate results.
   
   I think we should also update for this sentence, since now it applies to 
both streaming and batch.
   
   (Github does not allow me to comment on lines that are not changed by this 
PR. :( )
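
The consumer-weights option discussed in this review divides managed memory across consumer types in proportion to their weights. A minimal sketch of that arithmetic (the weight values here are illustrative examples, not Flink's defaults):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ConsumerWeights {
    // Split a managed memory budget across consumer types in proportion
    // to their configured weights, mirroring the idea behind
    // taskmanager.memory.managed.consumer-weights (integer division,
    // so shares are rounded down).
    static Map<String, Long> share(long totalBytes, Map<String, Integer> weights) {
        long weightSum = weights.values().stream().mapToLong(Integer::longValue).sum();
        Map<String, Long> shares = new LinkedHashMap<>();
        for (Map.Entry<String, Integer> e : weights.entrySet()) {
            shares.put(e.getKey(), totalBytes * e.getValue() / weightSum);
        }
        return shares;
    }

    public static void main(String[] args) {
        Map<String, Integer> weights = new LinkedHashMap<>();
        weights.put("OPERATOR", 70);      // example weight
        weights.put("STATE_BACKEND", 70); // example weight
        weights.put("PYTHON", 30);        // example weight
        share(1024L * 1024 * 1024, weights)
                .forEach((k, v) -> System.out.println(k + " -> " + v + " bytes"));
    }
}
```

A consumer type with weight 0 would simply receive no managed memory under this scheme.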




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16034: [FLINK-22794][table-api]Upgrade JUnit Vintage for flink-sql-parser and flink-sql-hive-parser

2021-05-31 Thread GitBox


flinkbot edited a comment on pull request #16034:
URL: https://github.com/apache/flink/pull/16034#issuecomment-851582958


   
   ## CI report:
   
   * 777b9e65a26b6dbca69c7a9bae35bb8baf90d327 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18460)
 
   * 944ab7cb26cff18ae1d6b036e66afe4d45258ff4 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18473)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16029: [FLINK-22786][sql-client] sql-client can not create .flink-sql-history file

2021-05-31 Thread GitBox


flinkbot edited a comment on pull request #16029:
URL: https://github.com/apache/flink/pull/16029#issuecomment-851388620


   
   ## CI report:
   
   * 08a786455ddca7efb86a77dae27511f050d728f6 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18471)
 
   * 2d98e97a844ff39a858c9500ff0a8cf49b8ca055 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16036: [FLINK-22689][doc] Table API Documentation Row-Based Operations Examp…

2021-05-31 Thread GitBox


flinkbot edited a comment on pull request #16036:
URL: https://github.com/apache/flink/pull/16036#issuecomment-851746085


   
   ## CI report:
   
   * 484f4fed68bd22ecfe249e4fc909da2a23e0d826 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18470)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15524: [FLINK-21667][runtime] Defer starting ResourceManager to after obtaining leadership.

2021-05-31 Thread GitBox


flinkbot edited a comment on pull request #15524:
URL: https://github.com/apache/flink/pull/15524#issuecomment-815574117


   
   ## CI report:
   
   * f5d1cd207a362b395ad1628b25b40f9abc2caaf9 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16767)
 
   * 9867b73640b26f7072bf4fe61dfe0f5a49765a94 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18472)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-22806) The folder /checkpoint/shared state of FLINKSQL is getting bigger and bigger

2021-05-31 Thread Yun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Tang closed FLINK-22806.

Resolution: Information Provided

> The folder /checkpoint/shared state of  FLINKSQL is getting bigger and bigger
> -
>
> Key: FLINK-22806
> URL: https://issues.apache.org/jira/browse/FLINK-22806
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Affects Versions: 1.12.0
>Reporter: YUJIANBO
>Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-22806) The folder /checkpoint/shared state of FLINKSQL is getting bigger and bigger

2021-05-31 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17354767#comment-17354767
 ] 

Yun Tang edited comment on FLINK-22806 at 6/1/21, 3:14 AM:
---

I noticed that you have already opened a related thread on the Chinese user 
mailing list, and I replied to you in that thread.

I plan to close this ticket first, as Apache JIRA is not the place to ask 
questions. If some internal bug really exists, we can then create a related 
ticket.


was (Author: yunta):
I noticed that you have already opened related issue in Chinese user mailing 
list, and replied in that thread.

I planed to close this ticket first as Apache JIRA is not somewhere to ask 
questions. If some internal bugs really exist, we could then create related 
ticket.

> The folder /checkpoint/shared state of  FLINKSQL is getting bigger and bigger
> -
>
> Key: FLINK-22806
> URL: https://issues.apache.org/jira/browse/FLINK-22806
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Affects Versions: 1.12.0
>Reporter: YUJIANBO
>Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] fsk119 commented on pull request #16029: [FLINK-22786][sql-client] sql-client can not create .flink-sql-history file

2021-05-31 Thread GitBox


fsk119 commented on pull request #16029:
URL: https://github.com/apache/flink/pull/16029#issuecomment-851774064


   @pensz Hi, the CI test fails. Please run `mvn spotless:apply` to fix the 
checkstyle problem.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-22806) The folder /checkpoint/shared state of FLINKSQL is getting bigger and bigger

2021-05-31 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17354767#comment-17354767
 ] 

Yun Tang commented on FLINK-22806:
--

I noticed that you have already opened a related thread on the Chinese user 
mailing list, and I replied in that thread.

I plan to close this ticket first, as Apache JIRA is not the place to ask 
questions. If some internal bug really exists, we can then create a related 
ticket.

> The folder /checkpoint/shared state of  FLINKSQL is getting bigger and bigger
> -
>
> Key: FLINK-22806
> URL: https://issues.apache.org/jira/browse/FLINK-22806
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Affects Versions: 1.12.0
>Reporter: YUJIANBO
>Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] pensz commented on a change in pull request #16029: [FLINK-22786][sql-client] sql-client can not create .flink-sql-history file

2021-05-31 Thread GitBox


pensz commented on a change in pull request #16029:
URL: https://github.com/apache/flink/pull/16029#discussion_r642753682



##
File path: 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/CliUtilsTest.java
##
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.client.cli;
+
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+
+import static org.junit.Assert.assertTrue;
+
+/** Test {@link CliUtils}. */
+public class CliUtilsTest {
+
+@Rule public TemporaryFolder realFolder = new TemporaryFolder();
+
+@Rule public TemporaryFolder linkFolder = new TemporaryFolder();
+
+@Test
+public void testCreate() throws IOException {
+Path realDirHistoryFile = Paths.get(realFolder.getRoot().toString(), 
"history.file");
+CliUtils.createFile(realDirHistoryFile);
+assertTrue(Files.exists(realDirHistoryFile));
+
+Path link = Paths.get(linkFolder.getRoot().getAbsolutePath(), "link");
+Files.createSymbolicLink(link, realFolder.getRoot().toPath());
+Path linkDirHistoryFile = Paths.get(link.toAbsolutePath().toString(), 
"history.file");
+CliUtils.createFile(linkDirHistoryFile);
+assertTrue(Files.exists(linkDirHistoryFile));

Review comment:
   Added check on file under realFolder.

##
File path: 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/CliUtilsTest.java
##
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.client.cli;
+
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+
+import static org.junit.Assert.assertTrue;
+
+/** Test {@link CliUtils}. */
+public class CliUtilsTest {
+
+@Rule public TemporaryFolder realFolder = new TemporaryFolder();
+
+@Rule public TemporaryFolder linkFolder = new TemporaryFolder();
+
+@Test
+public void testCreate() throws IOException {

Review comment:
   Updated. I will be careful next time.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] xintongsong commented on pull request #15524: [FLINK-21667][runtime] Defer starting ResourceManager to after obtaining leadership.

2021-05-31 Thread GitBox


xintongsong commented on pull request #15524:
URL: https://github.com/apache/flink/pull/15524#issuecomment-851772901


   @wangyang0918,
   I've addressed your comments and rebased to resolve conflicts. Please take another look.
   I've also opened FLINK-22816 to investigate the possibility of multi-leader sessions.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-22816) Investigate feasibility of supporting multiple RM leader sessions within JM process

2021-05-31 Thread Xintong Song (Jira)
Xintong Song created FLINK-22816:


 Summary: Investigate feasibility of supporting multiple RM leader 
sessions within JM process
 Key: FLINK-22816
 URL: https://issues.apache.org/jira/browse/FLINK-22816
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Coordination
Reporter: Xintong Song


In FLINK-21667, we decoupled RM leadership and lifecycle management. The RM is now 
started after obtaining leadership, and stopped on losing leadership.

Ideally, we may start and stop multiple RMs as the process obtains and loses 
leadership. However, as discussed in the 
[PR|https://github.com/apache/flink/pull/15524#pullrequestreview-663987547], 
having a process start multiple RMs may cause problems in some deployment 
modes. E.g., repeated AM registration is not allowed on Yarn.

We need to investigate, for all deployment modes:
- whether having multiple leader sessions causes problems;
- if it does, what we can do to solve the problem.

For reference, multi-leader-session support for the RM has been implemented in 
FLINK-21667, but is disabled by default. To enable it, set the system property 
"flink.tests.enable-rm-multi-leader-session". 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] fsk119 commented on a change in pull request #16028: [FLINK-22795] Throw better exception when executing remote SQL file in SQL Client

2021-05-31 Thread GitBox


fsk119 commented on a change in pull request #16028:
URL: https://github.com/apache/flink/pull/16028#discussion_r642751015



##
File path: 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/SqlClientTest.java
##
@@ -205,6 +205,17 @@ public void testExecuteSqlFile() throws Exception {
 }
 }
 
+@Test
+public void testExecuteSqlWithHDFSFile() throws Exception {
+List<String> statements = Collections.singletonList("HELP;\n");
+String sqlFilePath = createSqlFile(statements, "test-sql.sql");
+String[] args = new String[] {"-f", "hdfs:/" + sqlFilePath};

Review comment:
   Just an invalid file path is enough. We don't need to create a new file here.
   
   Should be `"hdfs://path/to/file"`?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] fsk119 commented on a change in pull request #16028: [FLINK-22795] Throw better exception when executing remote SQL file in SQL Client

2021-05-31 Thread GitBox


fsk119 commented on a change in pull request #16028:
URL: https://github.com/apache/flink/pull/16028#discussion_r642749269



##
File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/SqlClient.java
##
@@ -213,6 +214,16 @@ protected static void startClient(String[] args, Supplier<Terminal> terminalFactory
 }
 }
 
+public static void checkArgs(String[] args) {
+for (String arg : args) {
+if (arg.trim().toLowerCase().startsWith("hdfs")) {

Review comment:
   There are other protocols, e.g. oss. It would be better to check whether the scheme is "file". Also, I think the validation should look like:
   
   ```
   public static void checkFilePath(String filePath) {
       Path path = new Path(filePath);
       String scheme = path.toUri().getScheme();
       if (scheme != null && !scheme.equals("file")) {
           throw new SqlClientException("SQL Client only supports loading local files.");
       }
   }
   ```
   
   Please add the check around the 
[validation](https://github.com/apache/flink/blob/master/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/SqlClient.java#L123)
 here.
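
   For reference, the scheme check suggested above can be exercised standalone with `java.net.URI` in place of Flink's `org.apache.flink.core.fs.Path` (an assumption made here purely for self-containment):

   ```java
   import java.net.URI;

   public class SchemeCheckSketch {
       // Standalone sketch of the suggested validation. A null scheme
       // (a plain local path) or an explicit "file" scheme is accepted;
       // any other scheme (hdfs, oss, ...) is rejected.
       public static boolean isLocalFile(String filePath) {
           String scheme = URI.create(filePath).getScheme();
           return scheme == null || scheme.equals("file");
       }
   }
   ```

   This accepts paths like `/tmp/test-sql.sql` and `file:///tmp/test-sql.sql`, while rejecting `hdfs://...` and `oss://...` URIs.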




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] fsk119 commented on a change in pull request #16028: [FLINK-22795] Throw better exception when executing remote SQL file in SQL Client

2021-05-31 Thread GitBox


fsk119 commented on a change in pull request #16028:
URL: https://github.com/apache/flink/pull/16028#discussion_r642749501



##
File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/SqlExecutionException.java
##
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.client;
+
+/** Exception thrown during the execution of SQL statements. */
+public class SqlExecutionException extends RuntimeException {

Review comment:
   I think `SqlClientException` is enough.

##
File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/SqlClient.java
##
@@ -213,6 +214,16 @@ protected static void startClient(String[] args, Supplier<Terminal> terminalFactory
 }
 }
 
+public static void checkArgs(String[] args) {
+for (String arg : args) {
+if (arg.trim().toLowerCase().startsWith("hdfs")) {

Review comment:
   There are other protocols, e.g. oss. It would be better to check whether the scheme is "file". Also, I think the validation should look like:
   
   ```
   public static void checkFilePath(String filePath) {
       Path path = new Path(filePath);
       String scheme = path.toUri().getScheme();
       if (scheme != null && !scheme.equals("file")) {
           throw new SqlClientException("SQL Client only supports loading local files.");
       }
   }
   ```
   
   Please add the check around the 
[validation](https://github.com/apache/flink/blob/master/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/SqlClient.java#L123)
 here.

##
File path: 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/SqlClientTest.java
##
@@ -205,6 +205,17 @@ public void testExecuteSqlFile() throws Exception {
 }
 }
 
+@Test
+public void testExecuteSqlWithHDFSFile() throws Exception {
+List<String> statements = Collections.singletonList("HELP;\n");
+String sqlFilePath = createSqlFile(statements, "test-sql.sql");
+String[] args = new String[] {"-f", "hdfs:/" + sqlFilePath};

Review comment:
   Just using an invalid file path is enough. We don't need to create a new file here.
   
   Should be `"hdfs://path/to/file"`?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16034: [FLINK-22794][table-api]Upgrade JUnit Vintage for flink-sql-parser and flink-sql-hive-parser

2021-05-31 Thread GitBox


flinkbot edited a comment on pull request #16034:
URL: https://github.com/apache/flink/pull/16034#issuecomment-851582958


   
   ## CI report:
   
   * 777b9e65a26b6dbca69c7a9bae35bb8baf90d327 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18460)
 
   * 944ab7cb26cff18ae1d6b036e66afe4d45258ff4 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16029: [FLINK-22786][sql-client] sql-client can not create .flink-sql-history file

2021-05-31 Thread GitBox


flinkbot edited a comment on pull request #16029:
URL: https://github.com/apache/flink/pull/16029#issuecomment-851388620


   
   ## CI report:
   
   * 08a786455ddca7efb86a77dae27511f050d728f6 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18471)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15524: [FLINK-21667][runtime] Defer starting ResourceManager to after obtaining leadership.

2021-05-31 Thread GitBox


flinkbot edited a comment on pull request #15524:
URL: https://github.com/apache/flink/pull/15524#issuecomment-815574117


   
   ## CI report:
   
   * f5d1cd207a362b395ad1628b25b40f9abc2caaf9 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16767)
 
   * 9867b73640b26f7072bf4fe61dfe0f5a49765a94 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] godfreyhe commented on a change in pull request #16024: [FLINK-22038][table-planner-blink] Update TopN to be without rowNumber if rowNumber field is never used by the successor Calc

2021-05-31 Thread GitBox


godfreyhe commented on a change in pull request #16024:
URL: https://github.com/apache/flink/pull/16024#discussion_r642744349



##
File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/FlinkStreamRuleSets.scala
##
@@ -374,6 +374,8 @@ object FlinkStreamRuleSets {
 PythonCorrelateSplitRule.INSTANCE,
 // merge calc after calc transpose
 FlinkCalcMergeRule.INSTANCE,
+// remove output of rank number when it is not used by successor calc
+RedundantRankNumberColumnRemoveRule.INSTANCE,

Review comment:
   Are the solutions for batch and stream different?

##
File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/FlinkBatchRuleSets.scala
##
@@ -307,6 +307,9 @@ object FlinkBatchRuleSets {
 CoreRules.PROJECT_TO_CALC,
 FlinkCalcMergeRule.INSTANCE,
 
+// remove output of rank number when it is not used by successor calc
+RedundantRankNumberColumnRemoveRule.INSTANCE,

Review comment:
   Please move this rule closer to the other rank rules.

##
File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/RedundantRankNumberColumnRemoveRule.scala
##
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical
+
+import org.apache.flink.table.planner.plan.nodes.logical.{FlinkLogicalCalc, FlinkLogicalRank}
+import org.apache.flink.table.planner.plan.utils.InputRefVisitor
+
+import org.apache.calcite.plan.RelOptRule.{any, operand}
+import org.apache.calcite.plan.{RelOptRule, RelOptRuleCall}
+import org.apache.calcite.rex.RexProgramBuilder
+
+import scala.collection.JavaConversions._
+
+/**
+  * Planner rule that removes the output column of rank number
+  * iff the rank number column is not used by successor calc.
+  */
+class RedundantRankNumberColumnRemoveRule
+  extends RelOptRule(
+operand(classOf[FlinkLogicalCalc],
+  operand(classOf[FlinkLogicalRank], any())),
+"RedundantRankNumberColumnRemoveRule") {
+
+  override def matches(call: RelOptRuleCall): Boolean = {
+val calc: FlinkLogicalCalc = call.rel(0)
+val rank: FlinkLogicalRank = call.rel(1)
+val rankNumberColumnIdx = rank.getRowType.getFieldCount - 1
+rank.outputRankNumber && !isFieldUsedByCalc(calc, rankNumberColumnIdx)
+  }
+
+  override def onMatch(call: RelOptRuleCall): Unit = {
+val calc: FlinkLogicalCalc = call.rel(0)
+val rank: FlinkLogicalRank = call.rel(1)
+val newRank = new FlinkLogicalRank(
+  rank.getCluster,
+  rank.getTraitSet,
+  rank.getInput,
+  rank.partitionKey,
+  rank.orderKey,
+  rank.rankType,
+  rank.rankRange,
+  rank.rankNumberType,
+  outputRankNumber = false)
+
+val rexBuilder = rank.getCluster.getRexBuilder
+val oldProgram = calc.getProgram
+val programBuilder = new RexProgramBuilder(newRank.getRowType, rexBuilder)
+oldProgram.getNamedProjects.foreach { pair =>
+  programBuilder.addProject(oldProgram.expandLocalRef(pair.left), pair.right)
+}
+if (oldProgram.getCondition != null) {
+  programBuilder.addCondition(oldProgram.expandLocalRef(oldProgram.getCondition))
+}
+val rexProgram = programBuilder.getProgram

Review comment:
   We can use `RexProgramBuilder.create` to simplify this code.

##
File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/RedundantRankNumberColumnRemoveRule.scala
##
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed 

[jira] [Comment Edited] (FLINK-16069) Creation of TaskDeploymentDescriptor can block main thread for long time

2021-05-31 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17354757#comment-17354757
 ] 

Zhu Zhu edited comment on FLINK-16069 at 6/1/21, 2:54 AM:
--

Even if the main thread has the highest priority, GC problems can still happen 
when serialized {{TaskDeploymentDescriptors}} are generated faster than they can 
be sent out and queue up. These queued {{TaskDeploymentDescriptors}} cost lots 
of memory and cannot be GCed before being sent out. Frequent young GCs would 
consume lots of CPU, and either more heap memory would be required or 
consecutive full GCs could happen.


was (Author: zhuzh):
Even if the main thread can have the highest priority, GC problem can still 
happen when serialized {{TaskDeploymentDescriptor}}s are generated too fast and 
queued to be sent out. These queued {{TaskDeploymentDescriptor}}s will cost 
lots of memory and cannot be {{GC}}ed before sent out. Frequent young GC would 
consume lots of CPU. And more heap memory will be required or consecutive full 
GC could happen.

> Creation of TaskDeploymentDescriptor can block main thread for long time
> 
>
> Key: FLINK-16069
> URL: https://issues.apache.org/jira/browse/FLINK-16069
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Reporter: huweihua
>Priority: Major
> Attachments: FLINK-16069-POC-results
>
>
> The deployment of tasks takes a long time when we submit a high-parallelism 
> job. Execution#deploy runs in the main thread, so it blocks the JobMaster 
> from processing other akka messages, such as heartbeats. The creation of the 
> TaskDeploymentDescriptor takes most of the time; we can move it into a future.
> For example, for a job [source(8000)->sink(8000)], the 16000 tasks took more 
> than 1 min to go from SCHEDULED to DEPLOYING. This caused TaskManager 
> heartbeats to time out and the job never succeeded.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16069) Creation of TaskDeploymentDescriptor can block main thread for long time

2021-05-31 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17354757#comment-17354757
 ] 

Zhu Zhu commented on FLINK-16069:
-

Even if the main thread has the highest priority, GC problems can still happen 
when serialized {{TaskDeploymentDescriptor}}s are generated faster than they can 
be sent out and queue up. These queued {{TaskDeploymentDescriptor}}s cost lots 
of memory and cannot be {{GC}}ed before being sent out. Frequent young GCs would 
consume lots of CPU, and either more heap memory would be required or 
consecutive full GCs could happen.

> Creation of TaskDeploymentDescriptor can block main thread for long time
> 
>
> Key: FLINK-16069
> URL: https://issues.apache.org/jira/browse/FLINK-16069
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Reporter: huweihua
>Priority: Major
> Attachments: FLINK-16069-POC-results
>
>
> The deployment of tasks takes a long time when we submit a high-parallelism 
> job. Execution#deploy runs in the main thread, so it blocks the JobMaster 
> from processing other akka messages, such as heartbeats. The creation of the 
> TaskDeploymentDescriptor takes most of the time; we can move it into a future.
> For example, for a job [source(8000)->sink(8000)], the 16000 tasks took more 
> than 1 min to go from SCHEDULED to DEPLOYING. This caused TaskManager 
> heartbeats to time out and the job never succeeded.
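
The direction proposed in the description — building descriptors off the main thread — can be sketched with a `CompletableFuture` handed to an I/O pool. All names below are illustrative stand-ins, not Flink's actual API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;

public class DeploySketch {
    // Illustrative only: stands in for the expensive serialization of
    // a TaskDeploymentDescriptor that currently runs on the main thread.
    static String createDescriptor(int subtaskIndex) {
        return "descriptor-" + subtaskIndex;
    }

    // Build the descriptor on an I/O executor so the main thread stays
    // free to process RPCs such as heartbeats; the caller composes the
    // returned future back onto the main thread for the actual send.
    public static CompletableFuture<String> createDescriptorAsync(
            int subtaskIndex, Executor ioExecutor) {
        return CompletableFuture.supplyAsync(
                () -> createDescriptor(subtaskIndex), ioExecutor);
    }
}
```

As the comments above note, the trade-off is that descriptors produced faster than they are sent still accumulate in memory, so offloading alone does not remove the GC pressure.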



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JasonLeeCoding commented on pull request #16028: [FLINK-22795] Throw better exception when executing remote SQL file in SQL Client

2021-05-31 Thread GitBox


JasonLeeCoding commented on pull request #16028:
URL: https://github.com/apache/flink/pull/16028#issuecomment-851766303


   @fsk119 Please help me review the code, thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] xintongsong commented on pull request #15524: [FLINK-21667][runtime] Defer starting ResourceManager to after obtaining leadership.

2021-05-31 Thread GitBox


xintongsong commented on pull request #15524:
URL: https://github.com/apache/flink/pull/15524#issuecomment-851765702


   > What will currently happen if an RM loses leadership? Will it simply 
terminate the process and let Yarn start a new process to reapply for 
leadership?
   
   Yes, exactly.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on pull request #16034: [FLINK-22794][table-api]Upgrade JUnit Vintage for flink-sql-parser and flink-sql-hive-parser

2021-05-31 Thread GitBox


wuchong commented on pull request #16034:
URL: https://github.com/apache/flink/pull/16034#issuecomment-851765396


   It seems that the `flink-sql-parser` and `flink-sql-parser-hive` modules are 
never tested in the CI build. I just added them to the test modules, cc @twalthr 
@zentol . 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16029: [FLINK-22786][sql-client] sql-client can not create .flink-sql-history file

2021-05-31 Thread GitBox


flinkbot edited a comment on pull request #16029:
URL: https://github.com/apache/flink/pull/16029#issuecomment-851388620


   
   ## CI report:
   
   * 233774d84c03ea4a77b3433cf44dc12dcb3a8814 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18439)
 
   * 08a786455ddca7efb86a77dae27511f050d728f6 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18471)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18958) Lose column comment when create table

2021-05-31 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-18958:

Labels: auto-unassigned pull-request-available  (was: auto-unassigned 
pull-request-available stale-major)

> Lose column comment when create table
> -
>
> Key: FLINK-18958
> URL: https://issues.apache.org/jira/browse/FLINK-18958
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Shengkai Fang
>Priority: Major
>  Labels: auto-unassigned, pull-request-available
> Fix For: 1.14.0
>
>
> Currently, table column will not store column comment and user can't see 
> column comment when use {{describe table}} sql.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] godfreyhe commented on a change in pull request #15997: [FLINK-22680][table-planner-blink] Fix IndexOutOfBoundsException when apply WatermarkAssignerChangelogNormalizeTransposeRule

2021-05-31 Thread GitBox


godfreyhe commented on a change in pull request #15997:
URL: https://github.com/apache/flink/pull/15997#discussion_r642741007



##
File path: 
flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/rules/physical/stream/WatermarkAssignerChangelogNormalizeTransposeRuleTest.xml
##
@@ -0,0 +1,191 @@
+
+
+
+  
+
+  
+
+
+  
+
+
+  
+
+  
+  
+
+  
+
+
+  
+
+
+  
+
+  
+  
+
+  
+
+
+  
+
+
+  
+
+
+  
+
+
+  
+
+  
+  
+
+  
+
+
+  
+
+
+  
+
+  
+  
+
+  
+
+
+  
+
+
+  
+
+  
+  
+
+  
+
+
+  
+
+
+  

[GitHub] [flink] flinkbot edited a comment on pull request #16029: [FLINK-22786][sql-client] sql-client can not create .flink-sql-history file

2021-05-31 Thread GitBox


flinkbot edited a comment on pull request #16029:
URL: https://github.com/apache/flink/pull/16029#issuecomment-851388620


   
   ## CI report:
   
   * 233774d84c03ea4a77b3433cf44dc12dcb3a8814 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18439)
 
   * 08a786455ddca7efb86a77dae27511f050d728f6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16036: [FLINK-22689][doc] Table API Documentation Row-Based Operations Examp…

2021-05-31 Thread GitBox


flinkbot edited a comment on pull request #16036:
URL: https://github.com/apache/flink/pull/16036#issuecomment-851746085


   
   ## CI report:
   
   * 484f4fed68bd22ecfe249e4fc909da2a23e0d826 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18470)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] fsk119 commented on a change in pull request #16029: [FLINK-22786][sql-client] sql-client can not create .flink-sql-history file

2021-05-31 Thread GitBox


fsk119 commented on a change in pull request #16029:
URL: https://github.com/apache/flink/pull/16029#discussion_r642737314



##
File path: 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/CliUtilsTest.java
##
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.client.cli;
+
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+
+import static org.junit.Assert.assertTrue;
+
+/** Test {@link CliUtils}. */
+public class CliUtilsTest {
+
+@Rule public TemporaryFolder realFolder = new TemporaryFolder();
+
+@Rule public TemporaryFolder linkFolder = new TemporaryFolder();
+
+@Test
+public void testCreate() throws IOException {
+Path realDirHistoryFile = Paths.get(realFolder.getRoot().toString(), "history.file");
+CliUtils.createFile(realDirHistoryFile);
+assertTrue(Files.exists(realDirHistoryFile));
+
+Path link = Paths.get(linkFolder.getRoot().getAbsolutePath(), "link");
+Files.createSymbolicLink(link, realFolder.getRoot().toPath());
+Path linkDirHistoryFile = Paths.get(link.toAbsolutePath().toString(), "history.file");
+CliUtils.createFile(linkDirHistoryFile);
+assertTrue(Files.exists(linkDirHistoryFile));

Review comment:
   It would be better to also check whether the file exists under 
`Paths.get(realFolder.getRoot().toString(), "history.file")`.

##
File path: 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/CliUtilsTest.java
##
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.client.cli;
+
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+
+import static org.junit.Assert.assertTrue;
+
+/** Test {@link CliUtils}. */
+public class CliUtilsTest {
+
+@Rule public TemporaryFolder realFolder = new TemporaryFolder();
+
+@Rule public TemporaryFolder linkFolder = new TemporaryFolder();
+
+@Test
+public void testCreate() throws IOException {

Review comment:
   Would be better to rename to `testCreateFile`.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] fsk119 commented on a change in pull request #15986: [FLINK-22155][table] EXPLAIN statement should validate insert and query separately

2021-05-31 Thread GitBox


fsk119 commented on a change in pull request #15986:
URL: https://github.com/apache/flink/pull/15986#discussion_r642734823



##
File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/operations/SqlToOperationConverter.java
##
@@ -844,16 +843,19 @@ private Operation convertShowViews(SqlShowViews 
sqlShowViews) {
 return new ShowViewsOperation();
 }
 
-/** Convert EXPLAIN statement. */
-private Operation convertExplain(SqlExplain sqlExplain) {
-Operation operation = convertSqlQuery(sqlExplain.getExplicandum());
-
-if (sqlExplain.getDetailLevel() != SqlExplainLevel.EXPPLAN_ATTRIBUTES
-|| sqlExplain.getDepth() != SqlExplain.Depth.PHYSICAL
-|| sqlExplain.getFormat() != SqlExplainFormat.TEXT) {
-throw new TableException("Only default behavior is supported now, 
EXPLAIN PLAN FOR xx");
+/** Convert RICH EXPLAIN statement. */
+private Operation convertRichExplain(SqlRichExplain sqlExplain) {
+Operation operation;
+SqlNode sqlNode = sqlExplain.getStatement();
+if (sqlNode instanceof RichSqlInsert) {
+operation = convertSqlInsert((RichSqlInsert) sqlNode);
+} else if (sqlNode instanceof SqlSelect) {
+operation = convertSqlQuery(sqlExplain.getStatement());

Review comment:
   I mean using `SqlValidatorImpl#validate(insertStatement)` may produce 
different plans compared to using `SqlToOperationConverter#convertSqlInsert`. 
The validation of the query is the same...








[GitHub] [flink] pensz commented on a change in pull request #16029: [FLINK-22786][sql-client] sql-client can not create .flink-sql-history file

2021-05-31 Thread GitBox


pensz commented on a change in pull request #16029:
URL: https://github.com/apache/flink/pull/16029#discussion_r642733457



##
File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliUtils.java
##
@@ -109,7 +109,7 @@ public static boolean createFile(final Path filePath) {
 if (parent == null) {
 return false;
 }
-Files.createDirectories(parent);
+Files.createDirectories(parent.toRealPath());

Review comment:
   Thanks for your review. I fixed the bug and added a related test case.
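
   The symlink behavior this fix works around can be reproduced outside Flink: 
`Files.createDirectories` throws `FileAlreadyExistsException` when the path it 
is given is a symbolic link to an existing directory (the JDK re-checks the 
existing path with `NOFOLLOW_LINKS`, and the link itself is not a directory), 
while resolving the path with `toRealPath()` first succeeds. A minimal 
standalone sketch, not Flink code; the temp-directory layout is invented to 
mirror the `realFolder`/`linkFolder` setup in the test, and creating symbolic 
links may require extra privileges on Windows:

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SymlinkCreateDirectoriesDemo {
    public static void main(String[] args) throws IOException {
        Path base = Files.createTempDirectory("symlink-demo");
        Path real = Files.createDirectory(base.resolve("real"));
        // link -> real, mirroring the linkFolder/realFolder layout in the test
        Path link = Files.createSymbolicLink(base.resolve("link"), real);

        try {
            // Fails: after the initial createDirectory attempt hits the
            // existing file, createDirectories checks it with NOFOLLOW_LINKS,
            // and the symlink itself is not a directory.
            Files.createDirectories(link);
            System.out.println("createDirectories(link) unexpectedly succeeded");
        } catch (FileAlreadyExistsException e) {
            System.out.println("createDirectories(link) threw FileAlreadyExistsException");
        }

        // Resolving symlinks first yields the real directory, so this succeeds
        // (the approach taken by the fix: createDirectories(parent.toRealPath())).
        Files.createDirectories(link.toRealPath());
        System.out.println("createDirectories(link.toRealPath()) succeeded");
    }
}
```

   The same effect applies when the symlink is an intermediate component of the 
parent path, which is why the fix resolves `parent` before creating directories.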








[GitHub] [flink] flinkbot commented on pull request #16036: [FLINK-22689][doc] Table API Documentation Row-Based Operations Examp…

2021-05-31 Thread GitBox


flinkbot commented on pull request #16036:
URL: https://github.com/apache/flink/pull/16036#issuecomment-851746085


   
   ## CI report:
   
   * 484f4fed68bd22ecfe249e4fc909da2a23e0d826 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] paul8263 commented on pull request #16036: [FLINK-22689][doc] Table API Documentation Row-Based Operations Examp…

2021-05-31 Thread GitBox


paul8263 commented on pull request #16036:
URL: https://github.com/apache/flink/pull/16036#issuecomment-851740041


   This issue could be fixed by changing the "as" expression in 
org/apache/flink/table/api/internal/BaseExpressions.java to the "as" in 
org/apache/flink/table/api/Table.java. I have tested those samples and they now 
work smoothly, but I still feel confused about those two "as" expressions.
   
   I have also tried using select to test the functionality of the as 
expression in org/apache/flink/table/api/internal/BaseExpressions.java with the 
sample code below:
   `Table table = input.select(call("func", $("c")).as("a", "b"));`
   
   The same example code works well in select while it fails in map or 
flatmap. However, the as clause still fails to expand the result of that 
ScalarFunction into two independent columns.
   
   So I guess the as expression in org/apache/flink/table/api/Table.java can 
both rename and expand fields, while the one in 
org/apache/flink/table/api/internal/BaseExpressions is only for naming fields. 
Am I correct?
   






[GitHub] [flink] flinkbot commented on pull request #16036: [FLINK-22689][doc] Table API Documentation Row-Based Operations Examp…

2021-05-31 Thread GitBox


flinkbot commented on pull request #16036:
URL: https://github.com/apache/flink/pull/16036#issuecomment-851739474


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 484f4fed68bd22ecfe249e4fc909da2a23e0d826 (Tue Jun 01 
01:14:15 UTC 2021)
   
   **Warnings:**
* Documentation files were touched, but no `docs/content.zh/` files: Update 
Chinese documentation or file Jira ticket.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   






[GitHub] [flink] paul8263 commented on pull request #16036: [FLINK-22689][doc] Table API Documentation Row-Based Operations Examp…

2021-05-31 Thread GitBox


paul8263 commented on pull request #16036:
URL: https://github.com/apache/flink/pull/16036#issuecomment-851739001


   This issue could be fixed by changing the "as" expression in 
org/apache/flink/table/api/internal/BaseExpressions.java to the "as" in 
org/apache/flink/table/api/Table.java. I have tested those samples and they now 
work smoothly, but I still feel confused about those two "as" expressions.
   
   I have also tried using select to test the functionality of the as 
expression in org/apache/flink/table/api/internal/BaseExpressions.java with the 
sample code below:
   `Table table = input.select(call("func", $("c")).as("a", "b"));`
   
   The same example code works well in select while it fails in map or 
flatmap. However, the as clause still fails to expand the result of that 
ScalarFunction into two independent columns.
   
   So I guess the as expression in org/apache/flink/table/api/Table.java can 
both rename and expand fields, while the one in 
org/apache/flink/table/api/internal/BaseExpressions is only for naming fields. 
Am I correct?
   






[jira] [Updated] (FLINK-22689) Table API Documentation Row-Based Operations Example Fails

2021-05-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-22689:
---
Labels: pull-request-available  (was: )

> Table API Documentation Row-Based Operations Example Fails
> --
>
> Key: FLINK-22689
> URL: https://issues.apache.org/jira/browse/FLINK-22689
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.1
>Reporter: Yunfeng Zhou
>Assignee: Yao Zhang
>Priority: Major
>  Labels: pull-request-available
>
> I wrote the following program according to the example code provided in 
> [Documentation/Table API/Row-based 
> operations|https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/tableapi/#row-based-operations]
> {code:java}
> public class TableUDF {
>     public static void main(String[] args) {
>         StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.createLocalEnvironment();
>         StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
> ​
>         Table input = tEnv.fromValues(
>                 DataTypes.of("ROW"),
>                 Row.of("name")
>         );
> ​
>         ScalarFunction func = new MyMapFunction();
>         tEnv.registerFunction("func", func);
> ​
>         Table table = input
>                 .map(call("func", $("c")).as("a", "b")); // exception occurs 
> here
> ​
>         table.execute().print();
>     }
> ​
>     public static class MyMapFunction extends ScalarFunction {
>         public Row eval(String a) {
>             return Row.of(a, "pre-" + a);
>         }
> ​
>         @Override
>         public TypeInformation getResultType(Class[] signature) {
>             return Types.ROW(Types.STRING, Types.STRING);
>         }
>     }
> }
> {code}
> The code above would throw an exception like this:
> {code}
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> Only a scalar function can be used in the map operator.
>   at 
> org.apache.flink.table.operations.utils.OperationTreeBuilder.map(OperationTreeBuilder.java:480)
>   at org.apache.flink.table.api.internal.TableImpl.map(TableImpl.java:519)
>   at org.apache.flink.ml.common.function.TableUDFBug.main(TableUDF.java:29)
> {code}
>   The core of the program above is identical to that provided in flink 
> documentation, but it cannot function correctly. This might affect users who 
> want to use custom function with table API.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] paul8263 opened a new pull request #16036: [FLINK-22689][doc] Table API Documentation Row-Based Operations Examp…

2021-05-31 Thread GitBox


paul8263 opened a new pull request #16036:
URL: https://github.com/apache/flink/pull/16036


   …le Fails
   
   ## What is the purpose of the change
   
   Correct erroneous sample code in the map and flatmap sections of 
docs/content/docs/dev/table/tableApi.md.
   
   ## Brief change log
   
   - docs/content/docs/dev/table/tableApi
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
   






[GitHub] [flink] SteNicholas commented on a change in pull request #16023: [FLINK-22698][connectors/rabbitmq] Add deliveryTimeout parameter to RabbitMQ source

2021-05-31 Thread GitBox


SteNicholas commented on a change in pull request #16023:
URL: https://github.com/apache/flink/pull/16023#discussion_r642718447



##
File path: 
flink-connectors/flink-connector-rabbitmq/src/main/java/org/apache/flink/streaming/connectors/rabbitmq/common/RMQConnectionConfig.java
##
@@ -343,6 +367,8 @@ public ConnectionFactory getConnectionFactory()
 // basicQos options for consumers
 private Integer prefetchCount;
 
+private Integer deliveryTimeout;

Review comment:
   IMO, the type of `deliveryTimeout` should be `Long`, with milliseconds as 
the default time unit.








[GitHub] [flink] SteNicholas commented on a change in pull request #16023: [FLINK-22698][connectors/rabbitmq] Add deliveryTimeout parameter to RabbitMQ source

2021-05-31 Thread GitBox


SteNicholas commented on a change in pull request #16023:
URL: https://github.com/apache/flink/pull/16023#discussion_r642718447



##
File path: 
flink-connectors/flink-connector-rabbitmq/src/main/java/org/apache/flink/streaming/connectors/rabbitmq/common/RMQConnectionConfig.java
##
@@ -343,6 +367,8 @@ public ConnectionFactory getConnectionFactory()
 // basicQos options for consumers
 private Integer prefetchCount;
 
+private Integer deliveryTimeout;

Review comment:
   IMO, the type of `deliveryTimeout` is `Long`. 








[GitHub] [flink] SteNicholas commented on a change in pull request #16023: [FLINK-22698][connectors/rabbitmq] Add deliveryTimeout parameter to RabbitMQ source

2021-05-31 Thread GitBox


SteNicholas commented on a change in pull request #16023:
URL: https://github.com/apache/flink/pull/16023#discussion_r642717539



##
File path: 
flink-connectors/flink-connector-rabbitmq/src/main/java/org/apache/flink/streaming/connectors/rabbitmq/common/RMQConnectionConfig.java
##
@@ -29,21 +29,24 @@
 import java.security.KeyManagementException;
 import java.security.NoSuchAlgorithmException;
 import java.util.Optional;
+import java.util.concurrent.TimeUnit;
 
 /**
  * Connection Configuration for RMQ. If {@link Builder#setUri(String)} has 
been set then {@link
  * RMQConnectionConfig#RMQConnectionConfig(String, Integer, Boolean, Boolean, 
Integer, Integer,
- * Integer, Integer, Integer)} will be used for initialize the RMQ connection 
or {@link
+ * Integer, Integer, Integer, Integer)} will be used for initialize the RMQ 
connection or {@link
  * RMQConnectionConfig#RMQConnectionConfig(String, Integer, String, String, 
String, Integer,
- * Boolean, Boolean, Integer, Integer, Integer, Integer, Integer)} will be 
used for initialize the
- * RMQ connection
+ * Boolean, Boolean, Integer, Integer, Integer, Integer, Integer, Integer)} 
will be used for
+ * initialize the RMQ connection
  */
 public class RMQConnectionConfig implements Serializable {
 
 private static final long serialVersionUID = 1L;
 
 private static final Logger LOG = 
LoggerFactory.getLogger(RMQConnectionConfig.class);
 
+private static final int DEFAULT_DELIVERY_TIMEOUT = 3;

Review comment:
   Thanks for your detailed explanation. I got your point about the delivery 
timeout and it makes sense to me.








[jira] [Assigned] (FLINK-22722) Add Documentation for Kafka New Source

2021-05-31 Thread Jiangjie Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin reassigned FLINK-22722:


Assignee: Qingsheng Ren

> Add Documentation for Kafka New Source
> --
>
> Key: FLINK-22722
> URL: https://issues.apache.org/jira/browse/FLINK-22722
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Qingsheng Ren
>Assignee: Qingsheng Ren
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> Documentation describing the usage of Kafka FLIP-27 new source is required in 
> Flink documentations.





[jira] [Resolved] (FLINK-22722) Add Documentation for Kafka New Source

2021-05-31 Thread Jiangjie Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin resolved FLINK-22722.
--
Resolution: Fixed

Merged to master: b582991b8b2b8dadb89e71d5002c4a9cc2055e34

> Add Documentation for Kafka New Source
> --
>
> Key: FLINK-22722
> URL: https://issues.apache.org/jira/browse/FLINK-22722
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Qingsheng Ren
>Assignee: Qingsheng Ren
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> Documentation describing the usage of Kafka FLIP-27 new source is required in 
> Flink documentations.





[GitHub] [flink] becketqin commented on pull request #15974: [FLINK-22722][docs/kafka] Add documentation for Kafka new source

2021-05-31 Thread GitBox


becketqin commented on pull request #15974:
URL: https://github.com/apache/flink/pull/15974#issuecomment-851731167


   Merged to master: b582991b8b2b8dadb89e71d5002c4a9cc2055e34






[GitHub] [flink] becketqin merged pull request #15974: [FLINK-22722][docs/kafka] Add documentation for Kafka new source

2021-05-31 Thread GitBox


becketqin merged pull request #15974:
URL: https://github.com/apache/flink/pull/15974


   






[GitHub] [flink] becketqin commented on pull request #15974: [FLINK-22722][docs/kafka] Add documentation for Kafka new source

2021-05-31 Thread GitBox


becketqin commented on pull request #15974:
URL: https://github.com/apache/flink/pull/15974#issuecomment-851730432


   Thanks for the patch. LGTM.






[jira] [Updated] (FLINK-20392) Migrating bash e2e tests to Java/Docker

2021-05-31 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-20392:
---
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, 
so it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Migrating bash e2e tests to Java/Docker
> ---
>
> Key: FLINK-20392
> URL: https://issues.apache.org/jira/browse/FLINK-20392
> Project: Flink
>  Issue Type: Improvement
>  Components: Test Infrastructure, Tests
>Reporter: Matthias
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> This Jira issue serves as an umbrella ticket for single e2e test migration 
> tasks. This should enable us to migrate all bash-based e2e tests step-by-step.
> The goal is to utilize the e2e test framework (see 
> [flink-end-to-end-tests-common|https://github.com/apache/flink/tree/master/flink-end-to-end-tests/flink-end-to-end-tests-common]).
>  Ideally, the test should use Docker containers as much as possible 
> disconnect the execution from the environment. A good source to achieve that 
> is [testcontainers.org|https://www.testcontainers.org/].
> The related ML discussion is [Stop adding new bash-based e2e tests to 
> Flink|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Stop-adding-new-bash-based-e2e-tests-to-Flink-td46607.html].





[jira] [Updated] (FLINK-18895) Not able to access Flink 1.7.2 with HDP 3.0.1 cluster

2021-05-31 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18895:
---
Labels: auto-deprioritized-critical stale-major  (was: 
auto-deprioritized-critical)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added the "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be deprioritized.


> Not able to access Flink 1.7.2 with HDP 3.0.1 cluster
> -
>
> Key: FLINK-18895
> URL: https://issues.apache.org/jira/browse/FLINK-18895
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.7.2
> Environment: * I am using Ambari Server 2.7.1 with HDP cluster 3.0.1 
> with YARN 3.1.1 and HBase 2.0.0
>  * Using hbase-client-2.0.0.jar along with Flink 1.7.2
>  * 
> [flink-1.7.2-bin-hadoop27-scala_2.11.tgz|https://archive.apache.org/dist/flink/flink-1.7.2/flink-1.7.2-bin-hadoop27-scala_2.11.tgz]
>  using this one for trying. 
>  * please let me know the difference between hadoop27 and hadoop28
>Reporter: Pasha Shaik
>Priority: Major
>  Labels: auto-deprioritized-critical, stale-major
> Attachments: 0874A89C-597E-451A-8986-E619A0E8237B.jpeg, 
> 0ECDB56A-3A76-424F-8926-A9FEB1BD96BB.jpeg, 
> 4237EAEA-D3CD-4791-8322-49E2F9FA8666.png, 
> 667CC1DB-CF0E-44E9-B9E8-1B44151FC00E.jpeg, 
> 87939306-6571-4886-A23B-4780897B88D4.jpeg
>
>
> * I am not able to access Flink 1.7.2 with HDP 3.0.1
>  * The YARN version is 3.1.1 and HBASE is 2.0.0
>  * Flink is successfully getting mounted on Yarn and showing it as RUNNING. 
>  * But in actual when i try to test my code, it is showing below error.
>  * The .tgz which I used is  
> [flink-1.7.2-bin-hadoop27-scala_2.11.tgz|https://archive.apache.org/dist/flink/flink-1.7.2/flink-1.7.2-bin-hadoop27-scala_2.11.tgz]
>  * Please let me know the difference between hadoop27 and hadoop28 and which 
> one to use for my HDP 3.0.1
> 
>                                           HERE ARE THE LOGS BELOW
> *org.apache.flink.runtime.client.JobExecutionException: Failed to submit job 
> cbb64a9b4e2e3ad0167eb4ceeb53ac87 (Flink Java Job at Tue Aug 11 10:10:47 CEST 
> 2020) at* 
> org.apache.flink.runtime.jobmanager.JobManager.org$apache$flink$runtime$jobmanager$JobManager$$submitJob(JobManager.scala:1325)
>  ~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1.applyOrElse(JobManager.scala:447)
>  ~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 
> scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36) 
> ~[scala-library-2.11.11.jar:?] at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:38)
>  ~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 
> scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36) 
> ~[scala-library-2.11.11.jar:?] at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33) 
> ~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28) 
> ~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 
> scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) 
> ~[scala-library-2.11.11.jar:?] at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>  ~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 
> akka.actor.Actor$class.aroundReceive(Actor.scala:502) 
> ~[akka-actor_2.11-2.4.20.jar:?] at 
> org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:122)
>  ~[flink-runtime_2.11-1.4.2.jar:1.4.2] at 
> akka.actor.ActorCell.receiveMessage(ActorCell.scala:526) 
> ~[akka-actor_2.11-2.4.20.jar:?] at 
> akka.actor.ActorCell.invoke(ActorCell.scala:495) 
> ~[akka-actor_2.11-2.4.20.jar:?] at 
> akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257) 
> ~[akka-actor_2.11-2.4.20.jar:?] at 
> akka.dispatch.Mailbox.run(Mailbox.scala:224) ~[akka-actor_2.11-2.4.20.jar:?] 
> at akka.dispatch.Mailbox.exec(Mailbox.scala:234) 
> ~[akka-actor_2.11-2.4.20.jar:?] at 
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) 
> ~[scala-library-2.11.11.jar:?] at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>  ~[scala-library-2.11.11.jar:?] at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) 
> ~[scala-library-2.11.11.jar:?] at 
> 

[jira] [Updated] (FLINK-18634) FlinkKafkaProducerITCase.testRecoverCommittedTransaction failed with "Timeout expired after 60000milliseconds while awaiting InitProducerId"

2021-05-31 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18634:
---
Labels: auto-unassigned stale-major test-stability  (was: auto-unassigned 
test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added the "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be deprioritized.


> FlinkKafkaProducerITCase.testRecoverCommittedTransaction failed with "Timeout 
> expired after 60000milliseconds while awaiting InitProducerId"
> 
>
> Key: FLINK-18634
> URL: https://issues.apache.org/jira/browse/FLINK-18634
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.0, 1.12.0, 1.13.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: auto-unassigned, stale-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4590=logs=c5f0071e-1851-543e-9a45-9ac140befc32=684b1416-4c17-504e-d5ab-97ee44e08a20
> {code}
> 2020-07-17T11:43:47.9693015Z [ERROR] Tests run: 12, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 269.399 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase
> 2020-07-17T11:43:47.9693862Z [ERROR] 
> testRecoverCommittedTransaction(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase)
>   Time elapsed: 60.679 s  <<< ERROR!
> 2020-07-17T11:43:47.9694737Z org.apache.kafka.common.errors.TimeoutException: 
> org.apache.kafka.common.errors.TimeoutException: Timeout expired after 
> 60000milliseconds while awaiting InitProducerId
> 2020-07-17T11:43:47.9695376Z Caused by: 
> org.apache.kafka.common.errors.TimeoutException: Timeout expired after 
> 60000milliseconds while awaiting InitProducerId
> {code}





[jira] [Updated] (FLINK-18642) Support emitting timed out patterns in MATCH_RECOGNIZE

2021-05-31 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18642:
---
Labels: stale-major  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added the "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be deprioritized.


> Support emitting timed out patterns in MATCH_RECOGNIZE
> --
>
> Key: FLINK-18642
> URL: https://issues.apache.org/jira/browse/FLINK-18642
> Project: Flink
>  Issue Type: Improvement
>  Components: Library / CEP, Table SQL / API
> Environment:  
>  
>Reporter: franklin DREW
>Priority: Major
>  Labels: stale-major
> Attachments: image-2020-07-20-16-35-58-495.png
>
>
> Whether Flink SQL CEP supports emitting output on match timeout has not been 
> introduced on the official website in several versions. I don't know if it's 
> because of the Calcite version. The statement ONE ROW PER MATCH WITH TIMEOUT 
> ROWS is not supported.
>  





[jira] [Updated] (FLINK-18610) Clean up Table connector docs grammar

2021-05-31 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18610:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added the "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be deprioritized.


> Clean up Table connector docs grammar
> -
>
> Key: FLINK-18610
> URL: https://issues.apache.org/jira/browse/FLINK-18610
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Seth Wiesman
>Priority: Major
>  Labels: auto-unassigned, stale-major
>






[jira] [Updated] (FLINK-18777) Supports schema registry catalog

2021-05-31 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18777:
---
Labels: pull-request-available stale-major  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to this issue. If this 
ticket is a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or the issue will be deprioritized in 7 days.


> Supports schema registry catalog
> 
>
> Key: FLINK-18777
> URL: https://issues.apache.org/jira/browse/FLINK-18777
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.11.0
>Reporter: Danny Chen
>Priority: Major
>  Labels: pull-request-available, stale-major
> Fix For: 1.14.0
>
>
> Design doc: 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-125%3A+Confluent+Schema+Registry+Catalog





[jira] [Updated] (FLINK-22443) Cannot execute an extremely long SQL statement under batch mode

2021-05-31 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22443:
---

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as a 
Blocker but is unassigned, and neither it nor its Sub-Tasks have been 
updated for 1 day. I have gone ahead and marked it "stale-blocker". If this 
ticket is a Blocker, please either assign yourself or give an update. 
Afterwards, please remove the label, or the issue will be 
deprioritized in 7 days.


> Cannot execute an extremely long SQL statement under batch mode
> ---
>
> Key: FLINK-22443
> URL: https://issues.apache.org/jira/browse/FLINK-22443
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.12.2
> Environment: execute command
>  
> {code:java}
> bin/sql-client.sh embedded -d conf/sql-client-batch.yaml 
> {code}
> content of conf/sql-client-batch.yaml
>  
> {code:java}
> catalogs:
> - name: bnpmphive
>   type: hive
>   hive-conf-dir: /home/gum/hive/conf
>   hive-version: 3.1.2
> execution:
>   planner: blink
>   type: batch
>   #type: streaming
>   result-mode: table
>   parallelism: 4
>   max-parallelism: 2000
>   current-catalog: bnpmphive
>   #current-database: snmpprobe 
> #configuration:
> #  table.sql-dialect: hive
> modules:
>   - name: core
>     type: core
>   - name: myhive
>     type: hive
> deployment:
>   # general cluster communication timeout in ms
>   response-timeout: 5000
>   # (optional) address from cluster to gateway
>   gateway-address: ""
>   # (optional) port from cluster to gateway
>   gateway-port: 0
> {code}
>  
>Reporter: macdoor615
>Priority: Blocker
>  Labels: stale-blocker, stale-critical
> Attachments: flink-gum-taskexecutor-8-hb3-prod-hadoop-002.log.4.zip, 
> raw_p_restapi_hcd.csv.zip
>
>
> 1. Execute an extremely long SQL statement under batch mode
>  
> {code:java}
> select
> 'CD' product_name,
> r.code business_platform,
> 5 statisticperiod,
> cast('2021-03-24 00:00:00' as timestamp) coltime,
> cast(r1.indicatorvalue as double) as YWPT_ZHQI_CD_038_GZ_2,
> cast(r2.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_7,
> cast(r3.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_5,
> cast(r4.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_6,
> cast(r5.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00029,
> cast(r6.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00028,
> cast(r7.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00015,
> cast(r8.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00014,
> cast(r9.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00011,
> cast(r10.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00010,
> cast(r11.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00013,
> cast(r12.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00012,
> cast(r13.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00027,
> cast(r14.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00026,
> cast(r15.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00046,
> cast(r16.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00047,
> cast(r17.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00049,
> cast(r18.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00048,
> cast(r19.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00024,
> cast(r20.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00025,
> cast(r21.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00022,
> cast(r22.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00023,
> cast(r23.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00054,
> cast(r24.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00055,
> cast(r25.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00033,
> cast(r26.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00032,
> cast(r27.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00053,
> cast(r28.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00052,
> cast(r29.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00051,
> cast(r30.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00050,
> cast(r31.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00043,
> cast(r32.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00042,
> cast(r33.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00017,
> cast(r34.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00016,
> cast(r35.indicatorvalue as double) as YWPT_ZHQI_CD_038_GZ_3,
> cast(r36.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00045,
> cast(r37.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00044,
> cast(r38.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00038,
> cast(r39.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00039,
> cast(r40.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00037,
> 

[jira] [Updated] (FLINK-18741) ProcessWindowFunction's process function exception

2021-05-31 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18741:
---
Labels: auto-deprioritized-critical stale-major  (was: 
auto-deprioritized-critical)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to this issue. If this 
ticket is a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or the issue will be deprioritized in 7 days.


> ProcessWindowFunction's process function exception
> ---
>
> Key: FLINK-18741
> URL: https://issues.apache.org/jira/browse/FLINK-18741
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.10.0
>Reporter: mzz
>Priority: Major
>  Labels: auto-deprioritized-critical, stale-major
>
> I use ProcessWindowFunction to compute PV, but when overriding the process 
> method, the user-defined state value is not returned correctly.
> code:
> {code:java}
> tem.keyBy(x =>
>   (x._1, x._2, x._4, x._5, x._6, x._7, x._8))
>   .timeWindow(Time.seconds(15 * 60)) //15 min window
>   .process(new ProcessWindowFunction[(String, String, String, String, 
> String, String, String, String, String), CkResult, (String, String, String, 
> String, String, String, String), TimeWindow] {
>   var clickCount: ValueState[Long] = _
>   var requestCount: ValueState[Long] = _
>   var returnCount: ValueState[Long] = _
>   var videoCount: ValueState[Long] = _
>   var noVideoCount: ValueState[Long] = _
>   override def open(parameters: Configuration): Unit = {
> clickCount = getRuntimeContext.getState(new 
> ValueStateDescriptor("clickCount", classOf[Long]))
> requestCount = getRuntimeContext.getState(new 
> ValueStateDescriptor("requestCount", classOf[Long]))
> returnCount = getRuntimeContext.getState(new 
> ValueStateDescriptor("returnCount", classOf[Long]))
> videoCount = getRuntimeContext.getState(new 
> ValueStateDescriptor("videoCount", classOf[Long]))
> noVideoCount = getRuntimeContext.getState(new 
> ValueStateDescriptor("noVideoCount", classOf[Long]))
>   }
>   override def process(key: (String, String, String, String, String, 
> String, String), context: Context, elements: Iterable[(String, String, 
> String, String, String, String, String, String, String)], out: 
> Collector[CkResult]) = {
> try {
>   var clickNum: Long = clickCount.value
>   val dateNow = 
> LocalDateTime.now().format(DateTimeFormatter.ofPattern("MMdd")).toLong
>   var requestNum: Long = requestCount.value
>   var returnNum: Long = returnCount.value
>   var videoNum: Long = videoCount.value
>   var noVideoNum: Long = noVideoCount.value
>   if (requestNum == null) {
> requestNum = 0
>   }
>   
>   val ecpm = key._7.toDouble.formatted("%.2f").toFloat
>   val created_at = getSecondTimestampTwo(new Date)
>  
>   elements.foreach(e => {
> if ("adreq".equals(e._3)) {
>   requestNum += 1
>   println(key._1, requestNum)
> }
>   })
>   requestCount.update(requestNum)
>   println(requestNum, key._1)
>   
>   out.collect(CkResult(dateNow, (created_at - getZero_time) / (60 * 
> 15), key._2, key._3, key._4, key._5, key._3 + "_" + key._4 + "_" + key._5, 
> key._6, key._1, requestCount.value, returnCount.value, fill_rate, 
> noVideoCount.value + videoCount.value,
> expose_rate, clickCount.value, click_rate, ecpm, 
> (noVideoCount.value * ecpm + videoCount.value * ecpm / 
> 1000.toFloat).formatted("%.2f").toFloat, created_at))
> }
> catch {
>   case e: Exception => println(key, e)
> }
>   }
> })
> {code}
> {code:java}
>   elements.foreach(e => {
> if ("adreq".equals(e._3)) {
>   requestNum += 1
>   println(key._1, requestNum)
> // The values printed here like :
> //(key,1)
> //(key,2)
> //(key,3)
> }
>   })
> //But print outside the for loop always like :
> //(key,0)
>   println(requestNum, key._1)
> {code}
> Who can help me? Thanks.





[jira] [Updated] (FLINK-18810) Golang remote functions SDK

2021-05-31 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18810:
---
Labels: auto-unassigned pull-request-available stale-major  (was: 
auto-unassigned pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to this issue. If this 
ticket is a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or the issue will be deprioritized in 7 days.


> Golang remote functions SDK
> ---
>
> Key: FLINK-18810
> URL: https://issues.apache.org/jira/browse/FLINK-18810
> Project: Flink
>  Issue Type: New Feature
>  Components: Stateful Functions
>Affects Versions: statefun-3.0.0
>Reporter: Francesco Guardiani
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-major
>
> Hi,
> I was wondering if there's already some WIP for a Golang SDK to create remote 
> functions. If not, I'm willing to give it a try.





[jira] [Updated] (FLINK-18701) NOT NULL constraint is not guaranteed when aggregation split is enabled

2021-05-31 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18701:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to this issue. If this 
ticket is a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or the issue will be deprioritized in 7 days.


> NOT NULL constraint is not guaranteed when aggregation split is enabled
> ---
>
> Key: FLINK-18701
> URL: https://issues.apache.org/jira/browse/FLINK-18701
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.1
>Reporter: Jark Wu
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> Take the following test: 
> {{org.apache.flink.table.planner.runtime.stream.sql.SplitAggregateITCase#testMinMaxWithRetraction}}
> {code:scala}
> val t1 = tEnv.sqlQuery(
>   s"""
>  |SELECT
>  |  c, MIN(b), MAX(b), COUNT(DISTINCT a)
>  |FROM(
>  |  SELECT
>  |a, COUNT(DISTINCT b) as b, MAX(b) as c
>  |  FROM T
>  |  GROUP BY a
>  |) GROUP BY c
>""".stripMargin)
> val sink = new TestingRetractSink
> t1.toRetractStream[Row].addSink(sink)
> env.execute()
> println(sink.getRawResults)
> {code}
> The query schema is
> {code:java}
> root
>  |-- c: INT
>  |-- EXPR$1: BIGINT NOT NULL
>  |-- EXPR$2: BIGINT NOT NULL
>  |-- EXPR$3: BIGINT NOT NULL
> {code}
> This should be correct as the count is never null and thus min/max are never 
> null, however, we can receive null in the sink.
> {code}
> List((true,1,null,null,1), (true,2,2,2,1), (false,1,null,null,1), 
> (true,6,2,2,1), (true,5,1,1,0), (false,5,1,1,0), (true,5,1,1,2), 
> (true,4,2,2,0), (false,5,1,1,2), (true,5,1,3,2), (false,4,2,2,0), 
> (false,5,1,3,2), (true,5,1,4,2))
> {code}





[jira] [Updated] (FLINK-18648) KafkaITCase.testStartFromGroupOffsets times out on azure

2021-05-31 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18648:
---
Labels: auto-deprioritized-critical stale-major test-stability  (was: 
auto-deprioritized-critical test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to this issue. If this 
ticket is a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or the issue will be deprioritized in 7 days.


> KafkaITCase.testStartFromGroupOffsets times out on azure
> 
>
> Key: FLINK-18648
> URL: https://issues.apache.org/jira/browse/FLINK-18648
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: auto-deprioritized-critical, stale-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4639=logs=ae4f8708-9994-57d3-c2d7-b892156e7812=c5f0071e-1851-543e-9a45-9ac140befc32
> {code}
> 2020-07-20T12:16:53.4337483Z [INFO] Tests run: 12, Failures: 0, Errors: 0, 
> Skipped: 0, Time elapsed: 203.504 s - in 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase
> 2020-07-20T12:16:53.9546587Z [INFO] 
> 2020-07-20T12:16:53.9546890Z [INFO] Results:
> 2020-07-20T12:16:53.9547158Z [INFO] 
> 2020-07-20T12:16:53.9547380Z [ERROR] Errors: 
> 2020-07-20T12:16:53.9548822Z [ERROR]   
> KafkaITCase.testStartFromGroupOffsets:158->KafkaConsumerTestBase.runStartFromGroupOffsets:540->KafkaConsumerTestBase.writeSequence:1992
>  » TestTimedOut
> 2020-07-20T12:16:53.9551978Z [INFO] 
> 2020-07-20T12:16:53.9552734Z [ERROR] Tests run: 80, Failures: 0, Errors: 1, 
> Skipped: 0
> {code}





[jira] [Updated] (FLINK-18742) Some configuration args do not take effect at client

2021-05-31 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18742:
---
Labels: auto-unassigned pull-request-available stale-major  (was: 
auto-unassigned pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to this issue. If this 
ticket is a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or the issue will be deprioritized in 7 days.


> Some configuration args do not take effect at client
> 
>
> Key: FLINK-18742
> URL: https://issues.apache.org/jira/browse/FLINK-18742
> Project: Flink
>  Issue Type: Improvement
>  Components: Command Line Client
>Affects Versions: 1.11.1
>Reporter: Matt Wang
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-major
> Fix For: 1.14.0
>
>
> Some configuration args from the command line do not take effect at the client. For 
> example, if the job sets {color:#505f79}_classloader.resolve-order_{color} 
> to _{color:#505f79}parent-first{color}_, it takes effect at the TaskManager, but 
> not at the Client.
> The *FlinkUserCodeClassLoaders* will be created before calling the method of 
> _{color:#505f79}getEffectiveConfiguration(){color}_ at 
> {color:#505f79}org.apache.flink.client.cli.CliFrontend{color}, so the 
> _{color:#505f79}Configuration{color}_ used by 
> _{color:#505f79}PackagedProgram{color}_ does not include Configuration args.
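The ordering bug described above can be reduced to a Flink-independent sketch (the class and method names below are illustrative assumptions, not Flink's actual API): if a component caches a configuration value before the command-line overrides are merged in, the override is silently lost.

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigOrder {

    // Stand-in for the defaults loaded from flink-conf.yaml.
    static Map<String, String> baseConfig() {
        Map<String, String> conf = new HashMap<>();
        conf.put("classloader.resolve-order", "child-first");
        return conf;
    }

    // Merges -D style command-line overrides on top of the base config.
    static Map<String, String> effectiveConfiguration(
            Map<String, String> base, Map<String, String> cliOverrides) {
        Map<String, String> merged = new HashMap<>(base);
        merged.putAll(cliOverrides);
        return merged;
    }

    // Returns "value-read-too-early/value-after-merge" for the demo key.
    static String demo() {
        Map<String, String> cli = new HashMap<>();
        cli.put("classloader.resolve-order", "parent-first");

        // Buggy order: the value is read before CLI overrides are merged,
        // mirroring a classloader built before getEffectiveConfiguration().
        String buggy = baseConfig().get("classloader.resolve-order");

        // Fixed order: merge first, then read.
        String fixed = effectiveConfiguration(baseConfig(), cli)
                .get("classloader.resolve-order");

        return buggy + "/" + fixed;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // child-first/parent-first
    }
}
```

The fix sketched in the linked pull request follows the second ordering: compute the effective configuration first, then build anything that reads from it.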





[jira] [Updated] (FLINK-19088) Flink SQL HBaseTableSource Supports FilterPushDown

2021-05-31 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-19088:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to this issue. If this 
ticket is a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or the issue will be deprioritized in 7 days.


> Flink SQL HBaseTableSource Supports FilterPushDown
> --
>
> Key: FLINK-19088
> URL: https://issues.apache.org/jira/browse/FLINK-19088
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / HBase, Table SQL / Ecosystem
>Affects Versions: 1.11.1
> Environment: flink sql 1.11
>Reporter: sijun.huang
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> Hi,
> In Flink SQL 1.11, if we create an HBase table via the HBase connector through 
> the Hive catalog, Flink will do a full table scan on the HBase table when we 
> query it, even if we specify a row key filter.
> For detailed info, see the post below:
> [http://apache-flink.147419.n8.nabble.com/flink-sql-1-11-hbase-td6652.html]
> So I strongly recommend that HBaseTableSource in Flink SQL 1.11 support 
> FilterPushDown, to avoid full table scans on HBase tables.
> Cheers.
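The difference between a client-side filter and a pushed-down rowkey filter can be illustrated without HBase at all. This is a toy model, not HBase or Flink API: HBase stores row keys sorted, so an equality predicate on the rowkey can narrow the read to one slice of the key space instead of touching every row.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class RowKeyPushdown {

    // Toy stand-in for an HBase table: row keys are kept sorted, like HBase does.
    static final TreeMap<String, String> TABLE = new TreeMap<>();
    static int rowsTouched; // counts how many rows each strategy reads

    static {
        TABLE.put("user#001", "alice");
        TABLE.put("user#002", "bob");
        TABLE.put("user#003", "carol");
    }

    // Full scan: read every row, filter on the client side.
    static List<String> fullScan(String wantedKey) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, String> e : TABLE.entrySet()) {
            rowsTouched++;
            if (e.getKey().equals(wantedKey)) out.add(e.getValue());
        }
        return out;
    }

    // Pushed-down filter: the rowkey predicate narrows the scan range, so only
    // the matching slice of the sorted key space is read.
    static List<String> pushedDownScan(String wantedKey) {
        List<String> out = new ArrayList<>();
        // subMap(k, k + "\0") covers exactly the keys equal to k.
        for (Map.Entry<String, String> e :
                TABLE.subMap(wantedKey, wantedKey + "\0").entrySet()) {
            rowsTouched++;
            out.add(e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        rowsTouched = 0;
        fullScan("user#002");
        int full = rowsTouched;
        rowsTouched = 0;
        pushedDownScan("user#002");
        System.out.println(full + " rows vs " + rowsTouched + " row");
    }
}
```

In real HBase, the narrowed range would correspond to setting start/stop rows on a `Scan` (or using a `Get`) instead of filtering after a whole-table scan.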





[jira] [Updated] (FLINK-18647) How to handle processing time timers with bounded input

2021-05-31 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18647:
---
Labels: auto-deprioritized-critical stale-major  (was: 
auto-deprioritized-critical)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to this issue. If this 
ticket is a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or the issue will be deprioritized in 7 days.


> How to handle processing time timers with bounded input
> ---
>
> Key: FLINK-18647
> URL: https://issues.apache.org/jira/browse/FLINK-18647
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.11.0
>Reporter: Piotr Nowojski
>Priority: Major
>  Labels: auto-deprioritized-critical, stale-major
>
> (most of this description comes from an offline discussion between me, 
> [~AHeise], [~roman_khachatryan], [~aljoscha] and [~sunhaibotb])
> In case of end of input (for example for bounded sources), all pending 
> (untriggered) processing time timers are ignored/dropped. In some cases this 
> is desirable, but for example for {{WindowOperator}} it means that last 
> trailing window will not be triggered, causing an apparent data loss.
> There are a couple of ideas what should be considered.
> 1. Provide a way for users to decide what to do with such timers: cancel, 
> wait, trigger immediately. For example by overloading the existing methods: 
> {{ProcessingTimeService#registerTimer}} and 
> {{ProcessingTimeService#scheduleAtFixedRate}} in the following way:
> {code:java}
> ScheduledFuture registerTimer(long timestamp, ProcessingTimeCallback 
> target, TimerAction timerAction);
> enum TimerAction { 
> CANCEL_ON_END_OF_INPUT, 
> TRIGGER_ON_END_OF_INPUT,
> WAIT_ON_END_OF_INPUT}
> {code}
> or maybe:
> {code}
> public interface TimerAction {
> void onEndOfInput(ScheduledFuture timer);
> }
> {code}
> But this would also mean we store additional state with each timer and we 
> need to modify the serialisation format (providing some kind of state 
> migration path) and potentially increase the size footprint of the timers.
> Extra overhead could have been avoided via some kind of {{Map<Timer, 
> TimerAction>}}, with lack of an entry meaning some default value.
> 2. 
> Also another way to solve this problem might be let the operator code decide 
> what to do with the given timer. Either ask an operator what should happen 
> with given timer (a), or let the operator iterate and cancel the timers on 
> endOfInput() (b), or just fire the timer with some endOfInput flag (c).
> I think none of (a), (b), and (c) would require breaking API changes, state 
> changes, or additional overhead. Just the logic for what to do with the 
> timer would have to be "hardcoded" in the operator's code (which btw might 
> even have the additional benefit of being easier to change in case of some 
> bugs, like a timer registered with a wrong/incorrect {{TimerAction}}).
> This is complicated a bit by a question, how (if at all?) options a), b) or 
> c) should be exposed to UDFs? 
> 3. 
> Maybe we need a combination of both? Pre existing operators could implement 
> some custom handling of this issue (via 2a, 2b or 2c), while UDFs could be 
> handled by 1.? 
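Option 1 above can be sketched as a tiny scheduler that is independent of Flink. The class and method names are illustrative only, not proposed API; the point is how `endOfInput()` resolves each pending timer according to its action.

```java
import java.util.ArrayList;
import java.util.List;

public class EndOfInputTimers {

    enum TimerAction {
        CANCEL_ON_END_OF_INPUT,
        TRIGGER_ON_END_OF_INPUT,
        WAIT_ON_END_OF_INPUT
    }

    static final class Timer {
        final long timestamp;
        final Runnable callback;
        final TimerAction action;

        Timer(long timestamp, Runnable callback, TimerAction action) {
            this.timestamp = timestamp;
            this.callback = callback;
            this.action = action;
        }
    }

    private final List<Timer> pending = new ArrayList<>();

    void registerTimer(long timestamp, Runnable callback, TimerAction action) {
        pending.add(new Timer(timestamp, callback, action));
    }

    // Resolves every pending timer according to its action; returns the timers
    // that still wait for wall-clock time to pass.
    List<Timer> endOfInput() {
        List<Timer> stillWaiting = new ArrayList<>();
        for (Timer t : pending) {
            switch (t.action) {
                case TRIGGER_ON_END_OF_INPUT:
                    t.callback.run();        // fire immediately
                    break;
                case CANCEL_ON_END_OF_INPUT:
                    break;                   // drop silently (today's behavior)
                case WAIT_ON_END_OF_INPUT:
                    stillWaiting.add(t);     // keep until it actually fires
                    break;
            }
        }
        pending.clear();
        return stillWaiting;
    }

    static String demo() {
        EndOfInputTimers svc = new EndOfInputTimers();
        StringBuilder fired = new StringBuilder();
        // A trailing window should fire at end of input; a cleanup timer should not.
        svc.registerTimer(100, () -> fired.append("window-close"),
                TimerAction.TRIGGER_ON_END_OF_INPUT);
        svc.registerTimer(200, () -> fired.append("cleanup"),
                TimerAction.CANCEL_ON_END_OF_INPUT);
        List<Timer> waiting = svc.endOfInput();
        return fired + "/waiting=" + waiting.size();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // window-close/waiting=0
    }
}
```

With per-timer actions like this, {{WindowOperator}} could mark its window-firing timers as trigger-on-end-of-input while leaving state-cleanup timers cancellable, avoiding the apparent data loss described above.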





[jira] [Updated] (FLINK-18908) Ensure max parallelism is at least 128 when bootstrapping savepoints

2021-05-31 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18908:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to this issue. If this 
ticket is a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or the issue will be deprioritized in 7 days.


> Ensure max parallelism is at least 128 when bootstrapping savepoints
> 
>
> Key: FLINK-18908
> URL: https://issues.apache.org/jira/browse/FLINK-18908
> Project: Flink
>  Issue Type: Improvement
>  Components: API / State Processor
>Reporter: Seth Wiesman
>Priority: Major
>  Labels: auto-unassigned, stale-major
>






[jira] [Updated] (FLINK-18958) Lose column comment when create table

2021-05-31 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18958:
---
Labels: auto-unassigned pull-request-available stale-major  (was: 
auto-unassigned pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to this issue. If this 
ticket is a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or the issue will be deprioritized in 7 days.


> Lose column comment when create table
> -
>
> Key: FLINK-18958
> URL: https://issues.apache.org/jira/browse/FLINK-18958
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Shengkai Fang
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-major
> Fix For: 1.14.0
>
>
> Currently, table columns do not store column comments, and users can't see 
> column comments when using {{describe table}} SQL.





[jira] [Updated] (FLINK-22543) layout of exception history tab isn't very usable with Flink SQL

2021-05-31 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22543:
---
Labels: stale-major  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to this issue. If this 
ticket is a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or the issue will be deprioritized in 7 days.


> layout of exception history tab isn't very usable with Flink SQL
> 
>
> Key: FLINK-22543
> URL: https://issues.apache.org/jira/browse/FLINK-22543
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.13.0
>Reporter: David Anderson
>Priority: Major
>  Labels: stale-major
> Attachments: image-2021-05-01-12-38-38-178.png
>
>
> With Flink SQL, the name field can be very long, in which case the Time and 
> Exception columns of the Exception History view become very narrow and hard 
> to read.
> Also, the Cancel Job button is covered over with other text.
>  
> !image-2021-05-01-12-38-38-178.png!





[jira] [Updated] (FLINK-22436) twitter datastream connector hangs, CRLF expected at end of chunk

2021-05-31 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22436:
---
Labels: stale-major  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to this issue. If this 
ticket is a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or the issue will be deprioritized in 7 days.


> twitter datastream connector hangs, CRLF expected at end of chunk
> -
>
> Key: FLINK-22436
> URL: https://issues.apache.org/jira/browse/FLINK-22436
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Connectors / Common
> Environment: My environment is just simple local flink env, using the 
> twitter connector example.
>Reporter: Jason Perez
>Priority: Major
>  Labels: stale-major
>
> Sorry for selecting Connectors/Common, Twitter didn't show up in the 
> Connectors / "X" list, I'm not sure why that is.
>  
> In addition to FLINK-22435, I found this exception as well when the 
> recent version (1.12) of the Twitter connector just hangs.
>  
> basically I am following the example here:
> [https://github.com/apache/flink/blob/master/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/twitter/TwitterExample.java]
>  
> which just seems to run indefinitely without any data throughput, results, 
> or exceptions.
>  
> task manager stdout (not logs) looks something like this:
>  
> {code:java}
> WARNING: Please consider reporting this to the maintainers of 
> org.apache.flink.shaded.akka.org.jboss.netty.util.internal.ByteBufferUtil
> WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
> WARNING: All illegal access operations will be denied in a future release
> May 01, 2021 8:37:04 AM 
> org.apache.flink.twitter.shaded.com.google.common.io.Closeables close
> WARNING: IOException thrown while closing Closeable.
> org.apache.http.MalformedChunkCodingException: CRLF expected at end of chunk
> at org.apache.http.impl.io.ChunkedInputStream.getChunkSize(ChunkedInputStream.java:253)
> at org.apache.http.impl.io.ChunkedInputStream.nextChunk(ChunkedInputStream.java:225)
> at org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:184)
> at org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:213)
> at org.apache.http.impl.io.ChunkedInputStream.close(ChunkedInputStream.java:315)
> at org.apache.http.conn.BasicManagedEntity.streamClosed(BasicManagedEntity.java:166)
> at org.apache.http.conn.EofSensorInputStream.checkClose(EofSensorInputStream.java:228)
> at org.apache.http.conn.EofSensorInputStream.close(EofSensorInputStream.java:172)
> at java.base/java.util.zip.InflaterInputStream.close(InflaterInputStream.java:231)
> at java.base/java.util.zip.GZIPInputStream.close(GZIPInputStream.java:136)
> at org.apache.http.client.entity.LazyDecompressingInputStream.close(LazyDecompressingInputStream.java:94)
> at org.apache.flink.twitter.shaded.com.google.common.io.Closeables.close(Closeables.java:77)
> at com.twitter.hbc.httpclient.Connection.close(Connection.java:64)
> at com.twitter.hbc.httpclient.ClientBase.run(ClientBase.java:148)
> at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
> at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
> at java.base/java.lang.Thread.run(Thread.java:832)
> May 01, 2021 8:39:06 AM 
> org.apache.flink.twitter.shaded.com.google.common.io.Closeables close
> WARNING: IOException thrown while closing Closeable.
> org.apache.http.MalformedChunkCodingException: CRLF expected at end of chunk
> at org.apache.http.impl.io.ChunkedInputStream.getChunkSize(ChunkedInputStream.java:253)
> at org.apache.http.impl.io.ChunkedInputStream.nextChunk(ChunkedInputStream.java:225)
> at org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:184)
> at org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:213)
> at org.apache.http.impl.io.ChunkedInputStream.close(ChunkedInputStream.java:315)
> at org.apache.http.conn.BasicManagedEntity.streamClosed(BasicManagedEntity.java:166)
> at org.apache.http.conn.EofSensorInputStream.checkClose(EofSensorInputStream.java:228)
> at org.apache.http.conn.EofSensorInputStream.close(EofSensorInputStream.java:172)
>  

[jira] [Updated] (FLINK-20443) ContinuousProcessingTimeTrigger doesn't fire at the end of the window

2021-05-31 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-20443:
---
Labels: pull-request-available stale-minor  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I noticed that neither this issue nor its 
subtasks had updates for 180 days, so I labeled it "stale-minor".  If you are 
still affected by this bug or are still interested in this issue, please update 
and remove the label.


> ContinuousProcessingTimeTrigger doesn't fire at the end of the window
> -
>
> Key: FLINK-20443
> URL: https://issues.apache.org/jira/browse/FLINK-20443
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Gee
>Priority: Minor
>  Labels: pull-request-available, stale-minor
>
>  
> {code:java}
> srcStream
> .timeWindowAll(Time.seconds(60))
> .trigger(ContinuousProcessingTimeTrigger.of(Time.seconds(10)))...
> {code}
>  
>  This correctly computes and emits results for the intervals 0-10s, 10-20s,
> 20-30s, 30-40s, and 40-50s, but the data sent during the 50-60s interval is
> lost. When the window ends, the firing time is just before 60s, so it is not
> equal to the window end time (60s) and the trigger never enters the branch
> that would fire for the final interval.
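The off-by-one-interval effect described above can be sketched in a small standalone simulation. This is NOT Flink code: the class, method, and millisecond values below are hypothetical and only illustrate the assumption that the window is purged at its max timestamp (one millisecond before the 60s boundary), so an aligned interval timer scheduled at exactly 60s can never fire.

```java
import java.util.ArrayList;
import java.util.List;

public class TriggerGapSketch {
    // Returns the processing times (ms) at which an aligned interval trigger
    // would fire for a window [0, windowEnd), assuming the window is purged
    // at windowEnd - 1 ms and timers at or after the purge time never run.
    static List<Long> firings(long windowEnd, long interval) {
        long maxTimestamp = windowEnd - 1; // assumed purge time
        List<Long> fired = new ArrayList<>();
        for (long t = interval; t <= windowEnd; t += interval) {
            if (t <= maxTimestamp) { // a timer at/after the purge is dropped
                fired.add(t);
            }
        }
        return fired;
    }

    public static void main(String[] args) {
        // Five firings, not six: the timer covering 50-60s lands at 60_000 ms,
        // after the purge at 59_999 ms, so that interval's data is never emitted.
        System.out.println(firings(60_000L, 10_000L));
        // prints [10000, 20000, 30000, 40000, 50000]
    }
}
```

Under these assumptions the fix direction suggested by the report is for the trigger to also fire when the window's own end-of-window timer arrives, not only at exact interval boundaries.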



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

