[GitHub] [flink] WeiZhong94 commented on pull request #13273: [FLINK-18801][docs][python] Add a "10 minutes to Table API" document under the "Python API" -> "User Guide" -> "Table API" section.

2020-08-31 Thread GitBox


WeiZhong94 commented on pull request #13273:
URL: https://github.com/apache/flink/pull/13273#issuecomment-684389492


   @dianfu Thanks for your comments. I have updated this PR.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-19091) Introduce expression DSL for Python Table API

2020-08-31 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19091:

Issue Type: New Feature  (was: Task)

> Introduce expression DSL for Python Table API
> ---------------------------------------------
>
> Key: FLINK-19091
> URL: https://issues.apache.org/jira/browse/FLINK-19091
> Project: Flink
>  Issue Type: New Feature
>  Components: API / Python
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Java expression DSL has been introduced in 
> [FLIP-55|https://cwiki.apache.org/confluence/display/FLINK/FLIP-55%3A+Introduction+of+a+Table+API+Java+Expression+DSL]
>  for the Java Table API. This feature is very useful for users. The aim of 
> this JIRA is to support expression DSL in the Python Table API to align with 
> the Java Table API.
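To illustrate what an expression DSL buys over string-based expressions, here is a minimal, hypothetical sketch in plain Python. The `Expr` and `col` names are illustration-only; the actual PyFlink API introduced by this issue may differ.

```python
class Expr:
    """Tiny expression-tree node; supports `+` and aliasing for illustration."""

    def __init__(self, text):
        self._text = text

    def __add__(self, other):
        # Allow mixing expressions and Python literals, as a DSL would.
        other_text = other._text if isinstance(other, Expr) else repr(other)
        return Expr("({} + {})".format(self._text, other_text))

    def alias(self, name):
        return Expr("{} AS {}".format(self._text, name))

    def __repr__(self):
        return self._text


def col(name):
    """Reference a column by name, as an expression DSL would."""
    return Expr(name)


# String-based projection (what the Python Table API supported before):
#     table.select("id + 1, data")
# DSL-based projection (the style this issue proposes), which is
# type-checkable and IDE-completable instead of an opaque string:
projection = [col("id") + 1, col("data").alias("d")]
print(projection)  # [(id + 1), data AS d]
```

The benefit is that expressions become first-class Python objects built by operator overloading, rather than strings parsed at runtime.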



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19091) Introduce expression DSL for Python Table API

2020-08-31 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19091:

Issue Type: Task  (was: New Feature)

> Introduce expression DSL for Python Table API
> ---------------------------------------------
>
> Key: FLINK-19091
> URL: https://issues.apache.org/jira/browse/FLINK-19091
> Project: Flink
>  Issue Type: Task
>  Components: API / Python
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Java expression DSL has been introduced in 
> [FLIP-55|https://cwiki.apache.org/confluence/display/FLINK/FLIP-55%3A+Introduction+of+a+Table+API+Java+Expression+DSL]
>  for the Java Table API. This feature is very useful for users. The aim of 
> this JIRA is to support expression DSL in the Python Table API to align with 
> the Java Table API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19091) Introduce expression DSL for Python Table API

2020-08-31 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19091:

Issue Type: New Feature  (was: Bug)

> Introduce expression DSL for Python Table API
> ---------------------------------------------
>
> Key: FLINK-19091
> URL: https://issues.apache.org/jira/browse/FLINK-19091
> Project: Flink
>  Issue Type: New Feature
>  Components: API / Python
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Java expression DSL has been introduced in 
> [FLIP-55|https://cwiki.apache.org/confluence/display/FLINK/FLIP-55%3A+Introduction+of+a+Table+API+Java+Expression+DSL]
>  for the Java Table API. This feature is very useful for users. The aim of 
> this JIRA is to support expression DSL in the Python Table API to align with 
> the Java Table API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13289: [FLINK-18548][table-planner] support flexible syntax for Temporal table join

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13289:
URL: https://github.com/apache/flink/pull/13289#issuecomment-683894616


   
   ## CI report:
   
   * 78e195f2120b88bff90da918c1a5a0ae5327285f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6029)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dianfu commented on a change in pull request #13273: [FLINK-18801][docs][python] Add a "10 minutes to Table API" document under the "Python API" -> "User Guide" -> "Table API" section

2020-08-31 Thread GitBox


dianfu commented on a change in pull request #13273:
URL: https://github.com/apache/flink/pull/13273#discussion_r480834034



##
File path: docs/dev/python/user-guide/table/10_minutes_to_table_api.zh.md
##
@@ -0,0 +1,739 @@
+---
+title: "10分钟入门Table API"

Review comment:
   ```suggestion
   title: "10分钟入门Python Table API"
   ```

##
File path: docs/dev/python/user-guide/table/10_minutes_to_table_api.md
##
@@ -0,0 +1,739 @@
+---
+title: "10 Minutes to Table API"
+nav-parent_id: python_tableapi
+nav-pos: 25
+---
+
+
+This document is a short introduction to the PyFlink Table API, intended to help new users quickly understand its basic usage.
+For advanced usage, please refer to the other documents in this User Guide.
+
+* This will be replaced by the TOC
+{:toc}
+
+Common Structure of Python Table API Program
+---
+
+All Table API and SQL programs, both batch and streaming, follow the same pattern. The following code example shows the common structure of Table API and SQL programs.
+
+{% highlight python %}
+
+from pyflink.table import EnvironmentSettings, StreamTableEnvironment
+
+# 1. create a TableEnvironment
+env_settings = EnvironmentSettings.new_instance().in_streaming_mode().use_blink_planner().build()
+table_env = StreamTableEnvironment.create(environment_settings=env_settings)
+
+# 2. create source Table
+table_env.execute_sql("""
+    CREATE TABLE datagen (
+        id INT,
+        data STRING
+    ) WITH (
+        'connector' = 'datagen',
+        'fields.id.kind' = 'sequence',
+        'fields.id.start' = '1',
+        'fields.id.end' = '10'
+    )
+""")
+
+# 3. create sink Table
+table_env.execute_sql("""
+    CREATE TABLE print (
+        id INT,
+        data STRING
+    ) WITH (
+        'connector' = 'print'
+    )
+""")
+
+# 4. query from source table and perform calculations
+# create a Table from a Table API query:
+source_table = table_env.from_path("datagen")
+# or create a Table from a SQL query:
+source_table = table_env.sql_query("SELECT * FROM datagen")
+
+result_table = source_table.select("id + 1, data")
+
+# 5. emit query result to sink table
+# emit a Table API result Table to a sink table:
+result_table.execute_insert("print").get_job_client().get_job_execution_result().result()
+# or emit results via SQL query:
+table_env.execute_sql("INSERT INTO print SELECT * FROM datagen").get_job_client().get_job_execution_result().result()
+
+{% endhighlight %}
+
+{% top %}
+
+Create a TableEnvironment
+---
+
+The `TableEnvironment` is a central concept of the Table API and SQL integration. The following code example shows how to create a `TableEnvironment`:
+
+{% highlight python %}
+
+from pyflink.table import EnvironmentSettings, StreamTableEnvironment, BatchTableEnvironment
+
+# create a blink streaming TableEnvironment
+env_settings = EnvironmentSettings.new_instance().in_streaming_mode().use_blink_planner().build()
+table_env = StreamTableEnvironment.create(environment_settings=env_settings)
+
+# create a blink batch TableEnvironment
+env_settings = EnvironmentSettings.new_instance().in_batch_mode().use_blink_planner().build()
+table_env = BatchTableEnvironment.create(environment_settings=env_settings)
+
+# create a flink streaming TableEnvironment
+env_settings = EnvironmentSettings.new_instance().in_streaming_mode().use_old_planner().build()
+table_env = StreamTableEnvironment.create(environment_settings=env_settings)
+
+# create a flink batch TableEnvironment
+env_settings = EnvironmentSettings.new_instance().in_batch_mode().use_old_planner().build()
+table_env = BatchTableEnvironment.create(environment_settings=env_settings)
+
+{% endhighlight %}
+
+The `TableEnvironment` is responsible for:
+
+* Creating `Table`s
+* Registering `Table`s as temporary views
+* Executing SQL queries, see [SQL]({% link dev/table/sql/index.md %}) for more details
+* Registering user-defined (scalar, table, or aggregate) functions, see [General User-defined Functions]({% link dev/python/user-guide/table/udfs/python_udfs.md %}) and [Vectorized User-defined Functions]({% link dev/python/user-guide/table/udfs/vectorized_python_udfs.md %}) for more details
+* Configuring the job, see [Python Configuration]({% link dev/python/user-guide/table/python_config.md %}) for more details
+* Managing Python dependencies, see [Dependency Management]({% link dev/python/user-guide/table/dependency_management.md %}) for more details
+* Submitting jobs for execution
+
+Currently there are two planners available: the flink planner and the blink planner.
+
+You should explicitly set which planner to use in the current program.
+We recommend using the blink planner whenever possible.
+
+{% top %}
+
+Create Tables
+---
+
+`Table` is a core component of the Python Table API. A `Table` is a logical representation of the intermediate result of a Table API job.
+
+A `Table` 

[jira] [Closed] (FLINK-18824) Support serialization for canal-json format

2020-08-31 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-18824.
---
Fix Version/s: 1.12.0
   Resolution: Fixed

Implemented in master (1.12.0): 853e1906c3fe3a3931c650c5ca6965a0d460240e

> Support serialization for canal-json format
> -------------------------------------------
>
> Key: FLINK-18824
> URL: https://issues.apache.org/jira/browse/FLINK-18824
> Project: Flink
>  Issue Type: Sub-task
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Reporter: Jark Wu
>Assignee: CaoZhen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Currently, the canal-json format only supports deserialization, not 
> serialization. This makes it inconvenient for users to write changelogs to a 
> message queue. 
> The serialization for canal-json can follow the JSON structure of Canal. 
> However, since Flink currently can't combine UPDATE_BEFORE and UPDATE_AFTER 
> into a single UPDATE message, we can encode UPDATE_BEFORE and UPDATE_AFTER as 
> DELETE and INSERT canal messages. 
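The encoding strategy described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual Flink serializer, and the JSON layout is a simplified subset of Canal's real envelope:

```python
import json

def encode_changelog_row(row_kind, row):
    """Map one Flink changelog row to a simplified Canal-style JSON message.

    Since Flink emits UPDATE_BEFORE / UPDATE_AFTER as two separate
    changelog rows, an update is serialized as a Canal DELETE (old value)
    followed by a Canal INSERT (new value).
    """
    kind_to_canal = {
        "INSERT": "INSERT",
        "DELETE": "DELETE",
        "UPDATE_BEFORE": "DELETE",  # old value is removed
        "UPDATE_AFTER": "INSERT",   # new value is added
    }
    return json.dumps({"data": [row], "type": kind_to_canal[row_kind]})

# One logical UPDATE arrives as two changelog rows and becomes two messages:
msgs = [
    encode_changelog_row("UPDATE_BEFORE", {"id": 1, "name": "old"}),
    encode_changelog_row("UPDATE_AFTER", {"id": 1, "name": "new"}),
]
for m in msgs:
    print(m)
```

A Canal-aware consumer reading these two messages back-to-back ends up with the same final row state as if it had seen a single UPDATE.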



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on pull request #13122: [FLINK-18824][format][canal] Support serialization for canal-json format

2020-08-31 Thread GitBox


wuchong commented on pull request #13122:
URL: https://github.com/apache/flink/pull/13122#issuecomment-684316789


   Merging...



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong merged pull request #13122: [FLINK-18824][format][canal] Support serialization for canal-json format

2020-08-31 Thread GitBox


wuchong merged pull request #13122:
URL: https://github.com/apache/flink/pull/13122


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-19106) Add more timeout options for remote function specs

2020-08-31 Thread Tzu-Li (Gordon) Tai (Jira)
Tzu-Li (Gordon) Tai created FLINK-19106:
---

 Summary: Add more timeout options for remote function specs
 Key: FLINK-19106
 URL: https://issues.apache.org/jira/browse/FLINK-19106
 Project: Flink
  Issue Type: Task
  Components: Stateful Functions
Reporter: Tzu-Li (Gordon) Tai
Assignee: Tzu-Li (Gordon) Tai
 Fix For: statefun-2.2.0


As of now, we only support setting the call timeout for remote functions, which 
spans a complete call including connection, writing request, server-side 
processing, and reading response times.

To allow more fine-grained control, we propose introducing configuration keys 
for {{connectTimeout}} / {{readTimeout}} / {{writeTimeout}} in remote function 
specs.
By default, these values should be 10 to stay consistent with the current behaviour.
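A hypothetical sketch of how the proposed keys might appear in a remote function spec (`module.yaml`). The key names, placement, and defaults shown here are assumptions for illustration, not the final design:

```yaml
# Hypothetical remote function spec fragment (module.yaml)
- function:
    meta:
      kind: http
      type: example/greeter
    spec:
      endpoint: http://functions.example.com/statefun
      timeout: 1min          # existing overall call timeout
      connectTimeout: 10s    # proposed: time to establish the connection
      readTimeout: 10s       # proposed: time to read the response
      writeTimeout: 10s      # proposed: time to write the request
```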



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13003: [FLINK-18737][docs]translate jdbc connector

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13003:
URL: https://github.com/apache/flink/pull/13003#issuecomment-664844064


   
   ## CI report:
   
   * 565d353b41557312917ef867210bb731dd972fe7 UNKNOWN
   * 7892181dbf2f1c3146cec80956cf88f2dff39957 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6033)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-19095) Add expire mode for remote function state TTL

2020-08-31 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-19095.
---
Resolution: Fixed

Fixed in statefun-master via 9a76143a10f7937abd2e166bf30c20f87f8d2029

> Add expire mode for remote function state TTL
> ---------------------------------------------
>
> Key: FLINK-19095
> URL: https://issues.apache.org/jira/browse/FLINK-19095
> Project: Flink
>  Issue Type: Task
>  Components: Stateful Functions
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Major
>  Labels: pull-request-available
> Fix For: statefun-2.2.0
>
>
> Previously, due to FLINK-17954, we did not allow setting an expire mode for 
> each remote function state. Now that remote function state is de-multiplexed, 
> we can easily support this. 
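A hypothetical sketch of a per-state expiration spec in a remote function's `module.yaml`. The key names (`expireAfter`, `expireMode`) and mode values are illustrative assumptions, not the confirmed syntax:

```yaml
# Hypothetical per-state TTL fragment of a remote function spec
spec:
  states:
    - name: seen_count
      expireAfter: 5min
      # With this issue, the expire mode becomes configurable per state,
      # e.g. refresh the TTL on writes only, or on both reads and writes:
      expireMode: after-read-write
```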



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-statefun] tzulitai closed pull request #135: [FLINK-19095] [remote] Allow state expiration mode for remote function state

2020-08-31 Thread GitBox


tzulitai closed pull request #135:
URL: https://github.com/apache/flink-statefun/pull/135


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-statefun] tzulitai commented on pull request #131: [FLINK-18968] Translate README.md to Chinese

2020-08-31 Thread GitBox


tzulitai commented on pull request #131:
URL: https://github.com/apache/flink-statefun/pull/131#issuecomment-684223257


   Just to make it a bit clearer:
   I think before we merge this, it is best if we first extend the [Flink 
translation 
specifications](https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications)
 for Stateful Functions.
   That would set a better foundation for future contributions that further 
translate the documentation.
   
   @billyrrr do you think that would be possible? Or do you prefer merging this 
first and working on the specifications afterwards?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13278: [FLINK-19091][python] Introduce expression DSL for Python Table API

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13278:
URL: https://github.com/apache/flink/pull/13278#issuecomment-683294857


   
   ## CI report:
   
   * ca0216692b0b058bd5cbe6a1a6b3f345feba3def UNKNOWN
   * 936c094040301f3eb23058dfd3fa24fc5f89cd6a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6014)
 
   * 23c61fffdf54d684e8346d7509846170f7151053 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6036)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13271: [FLINK-19043][docs-zh] Translate the 'Logging' page of 'Debugging & Monitoring' into Chinese

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13271:
URL: https://github.com/apache/flink/pull/13271#issuecomment-682323547


   
   ## CI report:
   
   * fa03e5ad55dd9fc51308ff2df1db31e872f4ab7c Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6031)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13291: [FLINK-18988][table] Continuous query with LATERAL and LIMIT produces…

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13291:
URL: https://github.com/apache/flink/pull/13291#issuecomment-684180034


   
   ## CI report:
   
   * 8406cd55c09be8ad125f574be5e58a91ad717f38 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6034)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13284: [FLINK-17016][runtime] Change blink planner batch jobs to run with pipelined region scheduling

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13284:
URL: https://github.com/apache/flink/pull/13284#issuecomment-683683000


   
   ## CI report:
   
   * 6e68f6bd327d805261acdc9005a9cfc099f595ae Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6035)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6011)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-benchmarks] zhijiangW commented on a change in pull request #3: [FLINK-18905] Provide basic benchmarks for MultipleInputStreamOperator

2020-08-31 Thread GitBox


zhijiangW commented on a change in pull request #3:
URL: https://github.com/apache/flink-benchmarks/pull/3#discussion_r480755236



##
File path: src/main/java/org/apache/flink/benchmark/MultipleInputBenchmark.java
##
@@ -0,0 +1,161 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.benchmark;
+
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
+import org.apache.flink.benchmark.functions.LongSource;
+import org.apache.flink.benchmark.functions.QueuingLongSource;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.datastream.MultipleConnectedStreams;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.functions.sink.DiscardingSink;
+import org.apache.flink.streaming.api.operators.AbstractInput;
+import org.apache.flink.streaming.api.operators.AbstractStreamOperatorFactory;
+import org.apache.flink.streaming.api.operators.AbstractStreamOperatorV2;
+import org.apache.flink.streaming.api.operators.Input;
+import org.apache.flink.streaming.api.operators.MultipleInputStreamOperator;
+import org.apache.flink.streaming.api.operators.StreamOperator;
+import org.apache.flink.streaming.api.operators.StreamOperatorParameters;
+import 
org.apache.flink.streaming.api.transformations.MultipleInputTransformation;
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.OperationsPerInvocation;
+import org.openjdk.jmh.runner.Runner;
+import org.openjdk.jmh.runner.RunnerException;
+import org.openjdk.jmh.runner.options.Options;
+import org.openjdk.jmh.runner.options.OptionsBuilder;
+import org.openjdk.jmh.runner.options.VerboseMode;
+
+import java.util.Arrays;
+import java.util.List;
+
+public class MultipleInputBenchmark extends BenchmarkBase {
+
+   public static final int RECORDS_PER_INVOCATION = 
TwoInputBenchmark.RECORDS_PER_INVOCATION;

Review comment:
   That is already fine for me.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13284: [FLINK-17016][runtime] Change blink planner batch jobs to run with pipelined region scheduling

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13284:
URL: https://github.com/apache/flink/pull/13284#issuecomment-683683000


   
   ## CI report:
   
   * 6e68f6bd327d805261acdc9005a9cfc099f595ae Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6011)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6035)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13291: [FLINK-18988][table] Continuous query with LATERAL and LIMIT produces…

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13291:
URL: https://github.com/apache/flink/pull/13291#issuecomment-684180034


   
   ## CI report:
   
   * 8406cd55c09be8ad125f574be5e58a91ad717f38 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6034)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13273: [FLINK-18801][docs][python] Add a "10 minutes to Table API" document under the "Python API" -> "User Guide" -> "Table API" section.

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13273:
URL: https://github.com/apache/flink/pull/13273#issuecomment-682353427


   
   ## CI report:
   
   * f71ac48908f7dbf2f10a9145262b8438d1f051c4 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6030)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13003: [FLINK-18737][docs]translate jdbc connector

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13003:
URL: https://github.com/apache/flink/pull/13003#issuecomment-664844064


   
   ## CI report:
   
   * dc06d534079644bbf17c8b2c29e73b7c0674a8c7 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4956)
 
   * 565d353b41557312917ef867210bb731dd972fe7 UNKNOWN
   * 7892181dbf2f1c3146cec80956cf88f2dff39957 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6033)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk commented on pull request #13284: [FLINK-17016][runtime] Change blink planner batch jobs to run with pipelined region scheduling

2020-08-31 Thread GitBox


zhuzhurk commented on pull request #13284:
URL: https://github.com/apache/flink/pull/13284#issuecomment-684182714


   The failed e2e test "Elasticsearch (v6.3.1) sink" is a known issue 
(FLINK-19093) that is unrelated to this change.
   Re-running the tests.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk commented on pull request #13284: [FLINK-17016][runtime] Change blink planner batch jobs to run with pipelined region scheduling

2020-08-31 Thread GitBox


zhuzhurk commented on pull request #13284:
URL: https://github.com/apache/flink/pull/13284#issuecomment-684182790


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-19093) "Elasticsearch (v6.3.1) sink end-to-end test" failed with "SubtaskCheckpointCoordinatorImpl was closed without closing asyncCheckpointRunnable 1"

2020-08-31 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17188125#comment-17188125
 ] 

Zhu Zhu commented on FLINK-19093:
-

Another instance: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=6011=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529=13722

> "Elasticsearch (v6.3.1) sink end-to-end test" failed with 
> "SubtaskCheckpointCoordinatorImpl was closed without closing 
> asyncCheckpointRunnable 1"
> -------------------------------------------------------------------------
>
> Key: FLINK-19093
> URL: https://issues.apache.org/jira/browse/FLINK-19093
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=5986=logs=91bf6583-3fb2-592f-e4d4-d79d79c3230a=3425d8ba-5f03-540a-c64b-51b8481bf7d6
> {code}
> 2020-08-29T22:20:02.3500263Z 2020-08-29 22:20:00,851 INFO  
> org.apache.flink.streaming.runtime.tasks.AsyncCheckpointRunnable [] - Source: 
> Sequence Source -> Flat Map -> Sink: Unnamed (1/1) - asynchronous part of 
> checkpoint 1 could not be completed.
> 2020-08-29T22:20:02.3501112Z java.lang.IllegalStateException: 
> SubtaskCheckpointCoordinatorImpl was closed without closing 
> asyncCheckpointRunnable 1
> 2020-08-29T22:20:02.3502049Z  at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:217) 
> ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-08-29T22:20:02.3503280Z  at 
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.registerAsyncCheckpointRunnable(SubtaskCheckpointCoordinatorImpl.java:371)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-08-29T22:20:02.3504647Z  at 
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.lambda$registerConsumer$2(SubtaskCheckpointCoordinatorImpl.java:479)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-08-29T22:20:02.3505882Z  at 
> org.apache.flink.streaming.runtime.tasks.AsyncCheckpointRunnable.run(AsyncCheckpointRunnable.java:95)
>  [flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-08-29T22:20:02.3506614Z  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_265]
> 2020-08-29T22:20:02.3507203Z  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_265]
> 2020-08-29T22:20:02.3507685Z  at java.lang.Thread.run(Thread.java:748) 
> [?:1.8.0_265]
> 2020-08-29T22:20:02.3509577Z 2020-08-29 22:20:00,927 INFO  
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl [] - Free slot 
> TaskSlot(index:0, state:ACTIVE, resource profile: 
> ResourceProfile{cpuCores=1., taskHeapMemory=384.000mb 
> (402653174 bytes), taskOffHeapMemory=0 bytes, managedMemory=512.000mb 
> (536870920 bytes), networkMemory=128.000mb (134217730 bytes)}, allocationId: 
> ca890bc4df19c66146370647d07bf510, jobId: 3522a3e4940d4b3cefc6dc1f22123f4b).
> 2020-08-29T22:20:02.3511425Z 2020-08-29 22:20:00,939 INFO  
> org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Remove job 
> 3522a3e4940d4b3cefc6dc1f22123f4b from job leader monitoring.
> 2020-08-29T22:20:02.3512499Z 2020-08-29 22:20:00,939 INFO  
> org.apache.flink.runtime.taskexecutor.TaskExecutor   [] - Close 
> JobManager connection for job 3522a3e4940d4b3cefc6dc1f22123f4b.
> 2020-08-29T22:20:02.3513174Z Checking for non-empty .out files...
> 2020-08-29T22:20:02.3513706Z No non-empty .out files.
> 2020-08-29T22:20:02.3513878Z 
> 2020-08-29T22:20:02.3514679Z [FAIL] 'Elasticsearch (v6.3.1) sink end-to-end 
> test' failed after 0 minutes and 37 seconds! Test exited with exit code 0 but 
> the logs contained errors, exceptions or non-empty .out files
> 2020-08-29T22:20:02.3515138Z 
> {code}
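The failure above is a close/register race: the asynchronous part of the checkpoint tries to register itself after `SubtaskCheckpointCoordinatorImpl` has already been closed, and the `Preconditions.checkState` guard fails. A minimal, self-contained sketch of that pattern (toy class and hypothetical names, not Flink's actual implementation):

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Toy coordinator illustrating the close/register race in the stack trace
 * above. If close() runs before an async checkpoint part registers itself,
 * the registration must fail fast with an IllegalStateException, mirroring
 * the Preconditions.checkState(...) call in the log.
 */
class ToyCheckpointCoordinator {
    private final Map<Long, Runnable> asyncRunnables = new HashMap<>();
    private boolean closed = false;

    synchronized void registerAsyncRunnable(long checkpointId, Runnable r) {
        // Registering against a closed coordinator is an illegal state.
        if (closed) {
            throw new IllegalStateException(
                    "coordinator was closed without closing asyncCheckpointRunnable "
                            + checkpointId);
        }
        asyncRunnables.put(checkpointId, r);
    }

    synchronized void close() {
        closed = true;
        asyncRunnables.clear();
    }
}
```

Synchronizing both methods on the same monitor is what makes the check reliable; without it, a runnable could slip in between the `closed` check and the registration.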



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #13291: [FLINK-18988][table] Continuous query with LATERAL and LIMIT produces…

2020-08-31 Thread GitBox


flinkbot commented on pull request #13291:
URL: https://github.com/apache/flink/pull/13291#issuecomment-684180034


   
   ## CI report:
   
   * 8406cd55c09be8ad125f574be5e58a91ad717f38 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13278: [FLINK-19091][python] Introduce expression DSL for Python Table API

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13278:
URL: https://github.com/apache/flink/pull/13278#issuecomment-683294857


   
   ## CI report:
   
   * ca0216692b0b058bd5cbe6a1a6b3f345feba3def UNKNOWN
   * 936c094040301f3eb23058dfd3fa24fc5f89cd6a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6014)
 
   * 23c61fffdf54d684e8346d7509846170f7151053 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13209: [FLINK-18832][datastream] Add compatible check for blocking partition with buffer timeout

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13209:
URL: https://github.com/apache/flink/pull/13209#issuecomment-677744672


   
   ## CI report:
   
   * a343c2c3bf36c97dca7045c65eccbcccfbbef5bf UNKNOWN
   * ec3be2e6f417e344583e984579a6734825750546 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6003)
 
   * 895da41424f7b688b26c469b61e3d024b0e325ed Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6032)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13003: [FLINK-18737][docs]translate jdbc connector

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13003:
URL: https://github.com/apache/flink/pull/13003#issuecomment-664844064


   
   ## CI report:
   
   * dc06d534079644bbf17c8b2c29e73b7c0674a8c7 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4956)
 
   * 565d353b41557312917ef867210bb731dd972fe7 UNKNOWN
   * 7892181dbf2f1c3146cec80956cf88f2dff39957 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #13291: [FLINK-18988][table] Continuous query with LATERAL and LIMIT produces…

2020-08-31 Thread GitBox


flinkbot commented on pull request #13291:
URL: https://github.com/apache/flink/pull/13291#issuecomment-684175977


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 8406cd55c09be8ad125f574be5e58a91ad717f38 (Tue Sep 01 
03:33:51 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] danny0405 opened a new pull request #13291: [FLINK-18988][table] Continuous query with LATERAL and LIMIT produces…

2020-08-31 Thread GitBox


danny0405 opened a new pull request #13291:
URL: https://github.com/apache/flink/pull/13291


   … wrong result
   
   The batch-mode rank only supports the RANK function, so we only rewrite the
   stream-mode query.
   
   ## What is the purpose of the change
   
   Fix the query of pattern:
   ```sql
   SELECT state, name
   FROM
 (SELECT DISTINCT state FROM cities) states,
 LATERAL (
   SELECT name, pop
   FROM cities
   WHERE state = states.state
   ORDER BY pop
   DESC LIMIT 3
 )
   ```
   Before this patch, the query generated a wrong plan and therefore wrong results.
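   The intended semantics of the query above is "top 3 cities by population per state". A plain-Java sketch of that semantics (illustrative names only, not Flink API), useful as a mental model for what the fixed plan should produce:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/**
 * Toy model of the LATERAL ... ORDER BY pop DESC LIMIT 3 query: for each
 * distinct state, keep the names of the three most populous cities.
 */
class TopCitiesPerState {

    static class City {
        final String state;
        final String name;
        final long pop;

        City(String state, String name, long pop) {
            this.state = state;
            this.name = name;
            this.pop = pop;
        }
    }

    static Map<String, List<String>> top3(List<City> cities) {
        // Group cities by state (the "SELECT DISTINCT state" outer query).
        Map<String, List<City>> byState = new LinkedHashMap<>();
        for (City c : cities) {
            byState.computeIfAbsent(c.state, s -> new ArrayList<>()).add(c);
        }
        // Per state: sort by population descending, keep at most 3 names
        // (the correlated ORDER BY pop DESC LIMIT 3 subquery).
        Map<String, List<String>> result = new LinkedHashMap<>();
        for (Map.Entry<String, List<City>> e : byState.entrySet()) {
            List<City> group = e.getValue();
            group.sort(Comparator.comparingLong((City c) -> c.pop).reversed());
            List<String> names = new ArrayList<>();
            for (City c : group.subList(0, Math.min(3, group.size()))) {
                names.add(c.name);
            }
            result.put(e.getKey(), names);
        }
        return result;
    }
}
```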
   
   ## Brief change log
   
 - Add a new rule `CorrelateSortToRankRule` for the rewrite
 - Add plan test and IT test
   
   ## Verifying this change
   
   Added tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not documented
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13209: [FLINK-18832][datastream] Add compatible check for blocking partition with buffer timeout

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13209:
URL: https://github.com/apache/flink/pull/13209#issuecomment-677744672


   
   ## CI report:
   
   * a343c2c3bf36c97dca7045c65eccbcccfbbef5bf UNKNOWN
   * ec3be2e6f417e344583e984579a6734825750546 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6003)
 
   * 895da41424f7b688b26c469b61e3d024b0e325ed UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] shuiqiangchen commented on a change in pull request #13230: [FLINK-18950][python][docs] Add documentation for Operations in Pytho…

2020-08-31 Thread GitBox


shuiqiangchen commented on a change in pull request #13230:
URL: https://github.com/apache/flink/pull/13230#discussion_r480679306



##
File path: docs/dev/python/user-guide/datastream/operations.md
##
@@ -0,0 +1,87 @@
+---

Review comment:
   Thank you, I will rename it to align with the title.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-19054) KafkaTableITCase.testKafkaSourceSink hangs

2020-08-31 Thread Zhijiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17188111#comment-17188111
 ] 

Zhijiang commented on FLINK-19054:
--

Another instance: 
https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/6003/logs/111

> KafkaTableITCase.testKafkaSourceSink hangs
> --
>
> Key: FLINK-19054
> URL: https://issues.apache.org/jira/browse/FLINK-19054
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.11.2
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=5844=logs=c5f0071e-1851-543e-9a45-9ac140befc32=684b1416-4c17-504e-d5ab-97ee44e08a20]
> {code}
> 2020-08-25T09:04:57.3569768Z "Kafka Fetcher for Source: 
> KafkaTableSource(price, currency, log_date, log_time, log_ts) -> 
> SourceConversion(table=[default_catalog.default_database.kafka, source: 
> [KafkaTableSource(price, currency, log_date, log_time, log_ts)]], 
> fields=[price, currency, log_date, log_time, log_ts]) -> Calc(select=[(price 
> + 1.0:DECIMAL(2, 1)) AS computed-price, price, currency, log_date, log_time, 
> log_ts, (log_ts + 1000:INTERVAL SECOND) AS ts]) -> 
> WatermarkAssigner(rowtime=[ts], watermark=[ts]) -> Calc(select=[ts, log_date, 
> log_time, CAST(ts) AS ts0, price]) (1/1)" #1501 daemon prio=5 os_prio=0 
> tid=0x7f25b800 nid=0x22b8 runnable [0x7f2127efd000]
> 2020-08-25T09:04:57.3571373Z    java.lang.Thread.State: RUNNABLE
> 2020-08-25T09:04:57.3571672Z  at sun.nio.ch.FileDispatcherImpl.read0(Native 
> Method)
> 2020-08-25T09:04:57.3572191Z  at 
> sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> 2020-08-25T09:04:57.3572921Z  at 
> sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
> 2020-08-25T09:04:57.3573419Z  at sun.nio.ch.IOUtil.read(IOUtil.java:197)
> 2020-08-25T09:04:57.3573957Z  at 
> sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:377)
> 2020-08-25T09:04:57.3574809Z  - locked <0xfde5a308> (a 
> java.lang.Object)
> 2020-08-25T09:04:57.3575448Z  at 
> org.apache.kafka.common.network.PlaintextTransportLayer.read(PlaintextTransportLayer.java:103)
> 2020-08-25T09:04:57.3576309Z  at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:117)
> 2020-08-25T09:04:57.3577086Z  at 
> org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
> 2020-08-25T09:04:57.3577727Z  at 
> org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
> 2020-08-25T09:04:57.3578403Z  at 
> org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
> 2020-08-25T09:04:57.3579486Z  at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
> 2020-08-25T09:04:57.3580240Z  at 
> org.apache.kafka.common.network.Selector.poll(Selector.java:483)
> 2020-08-25T09:04:57.3580880Z  at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:547)
> 2020-08-25T09:04:57.3581756Z  at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
> 2020-08-25T09:04:57.3583015Z  at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
> 2020-08-25T09:04:57.3583847Z  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1300)
> 2020-08-25T09:04:57.3584555Z  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1240)
> 2020-08-25T09:04:57.3585197Z  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1168)
> 2020-08-25T09:04:57.3585961Z  at 
> org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.run(KafkaConsumerThread.java:253)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13271: [FLINK-19043][docs-zh] Translate the 'Logging' page of 'Debugging & Monitoring' into Chinese

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13271:
URL: https://github.com/apache/flink/pull/13271#issuecomment-682323547


   
   ## CI report:
   
   * 64a6d4e0e88b1b73b86a00a0dc7ef7e59e538967 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6022)
 
   * fa03e5ad55dd9fc51308ff2df1db31e872f4ab7c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6031)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-19105) Table API Sample Code Error

2020-08-31 Thread weizheng (Jira)
weizheng created FLINK-19105:


 Summary: Table API Sample Code Error
 Key: FLINK-19105
 URL: https://issues.apache.org/jira/browse/FLINK-19105
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.11.1
Reporter: weizheng
 Fix For: 1.12.0


In

[https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/common.html]

The sample code in the "Emit a Table" section is wrong.

The code 

tableEnv.connect(new FileSystem("/path/to/file"))

should be

tableEnv.connect(new FileSystem().path("/path/to/file"))
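The reason is that the connector descriptor takes no constructor argument; the path is supplied through a fluent setter. A self-contained sketch of that builder pattern (toy class, not the real Flink `FileSystem` descriptor):

```java
/**
 * Illustrative sketch of why the doc snippet fails: descriptors of this
 * style have a no-arg constructor, and configuration such as the path is
 * provided via fluent setters that return the descriptor itself.
 */
class ToyFileSystemDescriptor {

    private String path;

    // No-arg constructor, as in the fluent descriptor API.
    ToyFileSystemDescriptor() {}

    // Fluent setter: new ToyFileSystemDescriptor().path("/path/to/file")
    ToyFileSystemDescriptor path(String path) {
        this.path = path;
        return this;
    }

    String getPath() {
        return path;
    }
}
```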



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] HuangXingBo commented on a change in pull request #13230: [FLINK-18950][python][docs] Add documentation for Operations in Pytho…

2020-08-31 Thread GitBox


HuangXingBo commented on a change in pull request #13230:
URL: https://github.com/apache/flink/pull/13230#discussion_r480232415



##
File path: docs/dev/python/user-guide/datastream/operations.md
##
@@ -0,0 +1,87 @@
+---

Review comment:
   Rename the file to operators.md ?

##
File path: docs/dev/python/user-guide/datastream/operations.md
##
@@ -0,0 +1,87 @@
+---
+title: "Operators"
+nav-parent_id: python_datastream_api
+nav-pos: 20
+---
+
+
+
+Operators transform one or more DataStreams into a new DataStream. Programs 
can combine multiple transformations into 
+sophisticated dataflow topologies.
+
+This section give a description of the basic transformations Python DataStream 
API provides, the effective physical 
+partitioning after applying those as well as insights into Flink's operator 
chaining.
+
+* This will be replaced by the TOC
+{:toc}
+
+# DataStream Transformations
+
+DataStream programs in Flink are regular programs that implement 
transformations on data streams (e.g., mapping, 
+filtering, reducing). Please see [operators]({% link 
dev/stream/operators/index.md %}
+?code_tab=python) for an overview of the available stream transformations in 
Python DataStream API.
+
+# Functions
+Most operators require a user-defined function. The following will describe 
different ways of how they can be specified.
+
+## Implementing Function Interfaces
+Function interfaces for different operations are provided in Python DataStream 
API. Users can implement a Function and 
+pass it to the corresponding operation. Take MapFunction for instance:
+
+{% highlight python %}
+# Implement a MapFunction that return plus one value of input value.

Review comment:
   return -> returns

##
File path: docs/dev/python/user-guide/datastream/operations.zh.md
##
@@ -0,0 +1,77 @@
+---

Review comment:
   Rename the file to operators.zh.md ?

##
File path: docs/dev/python/user-guide/datastream/operations.md
##
@@ -0,0 +1,87 @@
+---
+title: "Operators"
+nav-parent_id: python_datastream_api
+nav-pos: 20
+---
+
+
+
+Operators transform one or more DataStreams into a new DataStream. Programs 
can combine multiple transformations into 
+sophisticated dataflow topologies.
+
+This section give a description of the basic transformations Python DataStream 
API provides, the effective physical 
+partitioning after applying those as well as insights into Flink's operator 
chaining.
+
+* This will be replaced by the TOC
+{:toc}
+
+# DataStream Transformations
+
+DataStream programs in Flink are regular programs that implement 
transformations on data streams (e.g., mapping, 
+filtering, reducing). Please see [operators]({% link 
dev/stream/operators/index.md %}
+?code_tab=python) for an overview of the available stream transformations in 
Python DataStream API.
+
+# Functions
+Most operators require a user-defined function. The following will describe 
different ways of how they can be specified.
+
+## Implementing Function Interfaces
+Function interfaces for different operations are provided in Python DataStream 
API. Users can implement a Function and 
+pass it to the corresponding operation. Take MapFunction for instance:
+
+{% highlight python %}
+# Implement a MapFunction that return plus one value of input value.
+class MyMapFunction(MapFunction):
+
+def map(value):
+return value + 1
+
+data_stream = env.from_collection([1, 2, 3, 4, 5],type_info=Types.INT())
+mapped_stream = data_stream.map(MyMapFunction(), output_type=Types.INT())
+{% endhighlight %}
+
+Note In Python DataStream API, users are 
able to defined the output type information of the operation. If not 
+defined, the output type will be `Types.PICKLED_BYTE_ARRAY` that data will be 
in a form of byte array generated by 

Review comment:
   ```suggestion
   defined, the output type will be `Types.PICKLED_BYTE_ARRAY` so that data 
will be in a form of byte array generated by 
   ```

##
File path: docs/dev/python/user-guide/datastream/operations.md
##
@@ -0,0 +1,87 @@
+---
+title: "Operators"
+nav-parent_id: python_datastream_api
+nav-pos: 20
+---
+
+
+
+Operators transform one or more DataStreams into a new DataStream. Programs 
can combine multiple transformations into 
+sophisticated dataflow topologies.
+
+This section give a description of the basic transformations Python DataStream 
API provides, the effective physical 
+partitioning after applying those as well as insights into Flink's operator 
chaining.
+
+* This will be replaced by the TOC
+{:toc}
+
+# DataStream Transformations
+
+DataStream programs in Flink are regular programs that implement 
transformations on data streams (e.g., mapping, 
+filtering, reducing). Please see [operators]({% link 
dev/stream/operators/index.md %}
+?code_tab=python) for an overview of the available stream transformations in 
Python DataStream API.
+
+# Functions
+Most operators require a user-defined 

[GitHub] [flink] flinkbot edited a comment on pull request #13273: [FLINK-18801][docs][python] Add a "10 minutes to Table API" document under the "Python API" -> "User Guide" -> "Table API" section.

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13273:
URL: https://github.com/apache/flink/pull/13273#issuecomment-682353427


   
   ## CI report:
   
   * a7e0b78ef82e5634f75a865cd0e705bfa795f901 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6012)
 
   * f71ac48908f7dbf2f10a9145262b8438d1f051c4 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6030)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13271: [FLINK-19043][docs-zh] Translate the 'Logging' page of 'Debugging & Monitoring' into Chinese

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13271:
URL: https://github.com/apache/flink/pull/13271#issuecomment-682323547


   
   ## CI report:
   
   * 64a6d4e0e88b1b73b86a00a0dc7ef7e59e538967 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6022)
 
   * fa03e5ad55dd9fc51308ff2df1db31e872f4ab7c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18915) FIXED_PATH(dummy Hadoop Path) with WriterImpl may cause ORC writer OOM

2020-08-31 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17188106#comment-17188106
 ] 

Jingsong Lee commented on FLINK-18915:
--

CC: [~gaoyunhaii]

> FIXED_PATH(dummy Hadoop Path) with WriterImpl may cause ORC writer OOM
> --
>
> Key: FLINK-18915
> URL: https://issues.apache.org/jira/browse/FLINK-18915
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.11.0, 1.11.1
>Reporter: wei
>Priority: Critical
> Fix For: 1.11.2
>
>
> # OrcBulkWriterFactory
> {code:java}
> @Override
> public BulkWriter create(FSDataOutputStream out) throws IOException {
>OrcFile.WriterOptions opts = getWriterOptions();
>opts.physicalWriter(new PhysicalWriterImpl(out, opts));
>return new OrcBulkWriter<>(vectorizer, new WriterImpl(null, FIXED_PATH, 
> opts));
> }{code}
>  
> # MemoryManagerImpl
> {code:java}
> // 
> public void addWriter(Path path, long requestedAllocation,
> Callback callback) throws IOException {
>   checkOwner();
>   WriterInfo oldVal = writerList.get(path);
>   // this should always be null, but we handle the case where the memory
>   // manager wasn't told that a writer wasn't still in use and the task
>   // starts writing to the same path.
>   if (oldVal == null) {
> oldVal = new WriterInfo(requestedAllocation, callback);
> writerList.put(path, oldVal);
> totalAllocation += requestedAllocation;
>   } else {
> // handle a new writer that is writing to the same path
> totalAllocation += requestedAllocation - oldVal.allocation;
> oldVal.allocation = requestedAllocation;
> oldVal.callback = callback;
>   }
>   updateScale(true);
> }
> {code}
> A SinkTask may create multiple BulkWriters; because they all register under
> FIXED_PATH, each new writer overwrites the previous writer's callback, so the
> previous writer's WriterImpl#checkMemory is never called.
>  
>  
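The failure mode is an ordinary map-key collision. A self-contained sketch (hypothetical names, not the ORC `MemoryManagerImpl`) of how registering every writer under the same fixed path silently drops earlier callbacks:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Toy memory manager showing the bug: when every writer registers with the
 * same fixed path key, each addWriter call replaces the previously stored
 * callback, so earlier writers are never notified to check their memory.
 */
class ToyMemoryManager {

    private final Map<String, Runnable> callbacks = new HashMap<>();

    void addWriter(String path, Runnable checkMemoryCallback) {
        // Like MemoryManagerImpl#addWriter for an existing key: the stored
        // callback for this path is simply overwritten.
        callbacks.put(path, checkMemoryCallback);
    }

    void notifyWriters() {
        callbacks.values().forEach(Runnable::run);
    }
}
```

With distinct paths per writer, both callbacks would survive; with one shared key, only the last registered writer is ever notified.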



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18915) FIXED_PATH(dummy Hadoop Path) with WriterImpl may cause ORC writer OOM

2020-08-31 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-18915:
-
Fix Version/s: 1.11.2

> FIXED_PATH(dummy Hadoop Path) with WriterImpl may cause ORC writer OOM
> --
>
> Key: FLINK-18915
> URL: https://issues.apache.org/jira/browse/FLINK-18915
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.11.0, 1.11.1
>Reporter: wei
>Priority: Major
> Fix For: 1.11.2
>
>
> # OrcBulkWriterFactory
> {code:java}
> @Override
> public BulkWriter create(FSDataOutputStream out) throws IOException {
>OrcFile.WriterOptions opts = getWriterOptions();
>opts.physicalWriter(new PhysicalWriterImpl(out, opts));
>return new OrcBulkWriter<>(vectorizer, new WriterImpl(null, FIXED_PATH, 
> opts));
> }{code}
>  
> # MemoryManagerImpl
> {code:java}
> // 
> public void addWriter(Path path, long requestedAllocation,
> Callback callback) throws IOException {
>   checkOwner();
>   WriterInfo oldVal = writerList.get(path);
>   // this should always be null, but we handle the case where the memory
>   // manager wasn't told that a writer wasn't still in use and the task
>   // starts writing to the same path.
>   if (oldVal == null) {
> oldVal = new WriterInfo(requestedAllocation, callback);
> writerList.put(path, oldVal);
> totalAllocation += requestedAllocation;
>   } else {
> // handle a new writer that is writing to the same path
> totalAllocation += requestedAllocation - oldVal.allocation;
> oldVal.allocation = requestedAllocation;
> oldVal.callback = callback;
>   }
>   updateScale(true);
> }
> {code}
> A SinkTask may create multiple BulkWriters; because they all register under
> FIXED_PATH, each new writer overwrites the previous writer's callback, so the
> previous writer's WriterImpl#checkMemory is never called.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18915) FIXED_PATH(dummy Hadoop Path) with WriterImpl may cause ORC writer OOM

2020-08-31 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-18915:
-
Priority: Critical  (was: Major)

> FIXED_PATH(dummy Hadoop Path) with WriterImpl may cause ORC writer OOM
> --
>
> Key: FLINK-18915
> URL: https://issues.apache.org/jira/browse/FLINK-18915
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.11.0, 1.11.1
>Reporter: wei
>Priority: Critical
> Fix For: 1.11.2
>
>
> # OrcBulkWriterFactory
> {code:java}
> @Override
> public BulkWriter create(FSDataOutputStream out) throws IOException {
>OrcFile.WriterOptions opts = getWriterOptions();
>opts.physicalWriter(new PhysicalWriterImpl(out, opts));
>return new OrcBulkWriter<>(vectorizer, new WriterImpl(null, FIXED_PATH, 
> opts));
> }{code}
>  
> # MemoryManagerImpl
> {code:java}
> // 
> public void addWriter(Path path, long requestedAllocation,
> Callback callback) throws IOException {
>   checkOwner();
>   WriterInfo oldVal = writerList.get(path);
>   // this should always be null, but we handle the case where the memory
>   // manager wasn't told that a writer wasn't still in use and the task
>   // starts writing to the same path.
>   if (oldVal == null) {
> oldVal = new WriterInfo(requestedAllocation, callback);
> writerList.put(path, oldVal);
> totalAllocation += requestedAllocation;
>   } else {
> // handle a new writer that is writing to the same path
> totalAllocation += requestedAllocation - oldVal.allocation;
> oldVal.allocation = requestedAllocation;
> oldVal.callback = callback;
>   }
>   updateScale(true);
> }
> {code}
> A SinkTask may create multiple BulkWriters; because they all register under
> FIXED_PATH, each new writer overwrites the previous writer's callback, so the
> previous writer's WriterImpl#checkMemory is never called.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13273: [FLINK-18801][docs][python] Add a "10 minutes to Table API" document under the "Python API" -> "User Guide" -> "Table API" section.

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13273:
URL: https://github.com/apache/flink/pull/13273#issuecomment-682353427


   
   ## CI report:
   
   * a7e0b78ef82e5634f75a865cd0e705bfa795f901 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6012)
 
   * f71ac48908f7dbf2f10a9145262b8438d1f051c4 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] RocMarshal commented on a change in pull request #13271: [FLINK-19043][docs-zh] Translate the 'Logging' page of 'Debugging & Monitoring' into Chinese

2020-08-31 Thread GitBox


RocMarshal commented on a change in pull request #13271:
URL: https://github.com/apache/flink/pull/13271#discussion_r480620827



##
File path: docs/monitoring/logging.zh.md
##
@@ -23,47 +23,51 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-The logging in Flink is implemented using the slf4j logging interface. As 
underlying logging framework, log4j2 is used. We also provide logback 
configuration files and pass them to the JVM's as properties. Users willing to 
use logback instead of log4j2 can just exclude log4j2 (or delete it from the 
lib/ folder).
+Flink 中的日志记录是使用 slf4j 日志接口实现的。使用 log4j2 作为底层日志框架。我们也支持了 logback 
日志配置,只要将其配置文件作为参数传递给 JVM 即可。愿意使用 logback 而不是 log4j2 的用户只需排除 log4j2 的依赖(或从 lib/ 
文件夹中删除它)即可。
 
 * This will be replaced by the TOC
 {:toc}
 
-## Configuring Log4j2
+
 
-Log4j2 is controlled using property files. In Flink's case, the file is 
usually called `log4j.properties`. We pass the filename and location of this 
file using the `-Dlog4j.configurationFile=` parameter to the JVM.
+## 配置 Log4j2
 
-Flink ships with the following default properties files:
+Log4j2 是使用配置文件指定的。在 Flink 的使用中,该文件通常命名为 `log4j.properties`。我们使用 
`-Dlog4j.configurationFile=` 参数将该文件的文件名和位置传递给 JVM。
 
-- `log4j-cli.properties`: Used by the Flink command line client (e.g. `flink 
run`) (not code executed on the cluster)
-- `log4j-session.properties`: Used by the Flink command line client when 
starting a YARN or Kubernetes session (`yarn-session.sh`, 
`kubernetes-session.sh`)
-- `log4j.properties`: JobManager/Taskmanager logs (both standalone and YARN)
+Flink 附带以下默认日志配置文件:
 
-### Compatibility with Log4j1
+- `log4j-cli.properties`:由 Flink 命令行客户端使用(例如 `flink run`)(不包括在集群上执行的代码)
+- `log4j-session.properties`:Flink 命令行客户端在启动 YARN 或 Kubernetes session 
时使用(`yarn-session.sh`,`kubernetes-session.sh`)
+- `log4j.properties`:作为 JobManager/TaskManager 日志配置使用(standalone 和 YARN 
两种模式下皆使用)
 
-Flink ships with the [Log4j API 
bridge](https://logging.apache.org/log4j/log4j-2.2/log4j-1.2-api/index.html), 
allowing existing applications that work against Log4j1 classes to continue 
working.
+
 
-If you have custom Log4j1 properties files or code that relies on Log4j1, 
please check out the official Log4j 
[compatibility](https://logging.apache.org/log4j/2.x/manual/compatibility.html) 
and [migration](https://logging.apache.org/log4j/2.x/manual/migration.html) 
guides.
+### 与 Log4j1 的兼容性
 
-## Configuring Log4j1
+Flink 附带了 [Log4j API 
bridge](https://logging.apache.org/log4j/log4j-2.2/log4j-1.2-api/index.html),使得对
 Log4j1 工作的现有应用程序继续工作。

Review comment:
   ```suggestion
   Flink 附带了 [Log4j API 
bridge](https://logging.apache.org/log4j/log4j-2.2/log4j-1.2-api/index.html),使得现有作业能够继续使用
 log4j1 的接口。
   ```

##
File path: docs/monitoring/logging.zh.md
##
@@ -81,17 +85,19 @@ The provided `logback.xml` has the following form:
 
 {% endhighlight %}
 
-In order to control the logging level of 
`org.apache.flink.runtime.jobgraph.JobGraph`, for example, one would have to 
add the following line to the configuration file.
+例如,为了控制 `org.apache.flink.runtime.jobgraph.JobGraph` 的日志记录级别,必须将以下行添加到配置文件中。
 
 {% highlight xml %}
 
 {% endhighlight %}
 
-For further information on configuring logback see [LOGback's 
manual](http://logback.qos.ch/manual/configuration.html).
+有关配置日志的更多信息,请参见 [LOGback 手册](http://logback.qos.ch/manual/configuration.html)。
+
+
 
-## Best practices for developers
+## 开发人员的最佳实践
 
-The loggers using slf4j are created by calling
+Slf4j 的 loggers 通过调用来创建

Review comment:
   ```suggestion
   Slf4j 的 loggers 通过调用 `LoggerFactory` 的 `getLogger()` 方法创建
   ```

##
File path: docs/monitoring/logging.zh.md
##
@@ -23,47 +23,51 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-The logging in Flink is implemented using the slf4j logging interface. As 
underlying logging framework, log4j2 is used. We also provide logback 
configuration files and pass them to the JVM's as properties. Users willing to 
use logback instead of log4j2 can just exclude log4j2 (or delete it from the 
lib/ folder).
+Flink 中的日志记录是使用 slf4j 日志接口实现的。使用 log4j2 作为底层日志框架。我们也支持了 logback 
日志配置,只要将其配置文件作为参数传递给 JVM 即可。愿意使用 logback 而不是 log4j2 的用户只需排除 log4j2 的依赖(或从 lib/ 
文件夹中删除它)即可。
 
 * This will be replaced by the TOC
 {:toc}
 
-## Configuring Log4j2
+
 
-Log4j2 is controlled using property files. In Flink's case, the file is 
usually called `log4j.properties`. We pass the filename and location of this 
file using the `-Dlog4j.configurationFile=` parameter to the JVM.
+## 配置 Log4j2
 
-Flink ships with the following default properties files:
+Log4j2 是使用配置文件指定的。在 Flink 的使用中,该文件通常命名为 `log4j.properties`。我们使用 
`-Dlog4j.configurationFile=` 参数将该文件的文件名和位置传递给 JVM。
 
-- `log4j-cli.properties`: Used by the Flink command line client (e.g. `flink 
run`) (not code executed on the cluster)
-- `log4j-session.properties`: Used by the Flink command line client when 

[GitHub] [flink] flinkbot edited a comment on pull request #13289: [FLINK-18548][table-planner] support flexible syntax for Temporal table join

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13289:
URL: https://github.com/apache/flink/pull/13289#issuecomment-683894616


   
   ## CI report:
   
   * 0ab49320f3f6b4ea1f2bd4539dd5dc64d98c4dbb Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6023)
 
   * 78e195f2120b88bff90da918c1a5a0ae5327285f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6029)
 
   
   







[jira] [Assigned] (FLINK-19082) Add docs for temporal table and temporal table join

2020-08-31 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-19082:


Assignee: Leonard Xu

> Add docs for temporal table and temporal table join
> ---
>
> Key: FLINK-19082
> URL: https://issues.apache.org/jira/browse/FLINK-19082
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Leonard Xu
>Assignee: Leonard Xu
>Priority: Major
>






[jira] [Assigned] (FLINK-19081) Deprecate TemporalTableFunction and Table#createTemporalTableFunction()

2020-08-31 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-19081:


Assignee: Leonard Xu

> Deprecate TemporalTableFunction and Table#createTemporalTableFunction()
> ---
>
> Key: FLINK-19081
> URL: https://issues.apache.org/jira/browse/FLINK-19081
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Leonard Xu
>Assignee: Leonard Xu
>Priority: Major
>






[jira] [Assigned] (FLINK-19079) Support row time deduplicate operator

2020-08-31 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-19079:


Assignee: Leonard Xu

> Support row time deduplicate operator
> -
>
> Key: FLINK-19079
> URL: https://issues.apache.org/jira/browse/FLINK-19079
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Leonard Xu
>Assignee: Leonard Xu
>Priority: Major
>
> To convert an insert-only table into a versioned table, the recommended way is 
> to use a deduplicate query as follows; the resulting versioned view has a primary 
> key and an event time and can therefore serve as a versioned table.
> {code:java}
> CREATE VIEW versioned_rates AS
> SELECT currency, rate, currency_time
> FROM (
>       SELECT *,
>       ROW_NUMBER() OVER (PARTITION BY currency   -- inferred primary key
> ORDER BY currency_time   -- the event time  
> DESC) AS rowNum
>   FROM rates)
> WHERE rowNum = 1;
> {code}
> Currently the deduplicate operator only supports processing time; this issue 
> aims to support deduplication on event time.
>  
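The intended semantics of the deduplicate query above — keep, per currency, the row with the latest event time — can be sketched outside Flink in plain Python (illustrative only, mirroring the `ROW_NUMBER() ... ORDER BY currency_time DESC ... WHERE rowNum = 1` pattern):

```python
def deduplicate_latest(rows):
    """Keep the row with the latest event time per key.

    rows: iterable of (currency, rate, currency_time) tuples.
    """
    latest = {}
    for currency, rate, ts in rows:
        # Later event time wins, regardless of arrival order.
        if currency not in latest or ts > latest[currency][1]:
            latest[currency] = (rate, ts)
    return {k: v[0] for k, v in latest.items()}

rates = [
    ("EUR", 1.10, 1), ("EUR", 1.12, 3), ("EUR", 1.11, 2),
    ("JPY", 0.0091, 1), ("JPY", 0.0092, 5),
]
versioned = deduplicate_latest(rates)
# versioned == {"EUR": 1.12, "JPY": 0.0092}
```

The event-time variant of the operator has to produce this result even when rows arrive out of order, which is what distinguishes it from the existing processing-time deduplication.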





[jira] [Assigned] (FLINK-19080) Materialize TimeIndicatorRelDataType in the right input of temporal join

2020-08-31 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-19080:


Assignee: Leonard Xu

> Materialize TimeIndicatorRelDataType in the right input of temporal join
> 
>
> Key: FLINK-19080
> URL: https://issues.apache.org/jira/browse/FLINK-19080
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Leonard Xu
>Assignee: Leonard Xu
>Priority: Major
>
> The output of a temporal join should follow the left stream's time attribute, 
> so we should materialize the time attribute (TimeIndicatorRelDataType) if the 
> right input contains a time attribute.





[jira] [Assigned] (FLINK-19078) Import rowtime join temporal operator

2020-08-31 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-19078:


Assignee: Leonard Xu

> Import rowtime join temporal operator
> -
>
> Key: FLINK-19078
> URL: https://issues.apache.org/jira/browse/FLINK-19078
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Leonard Xu
>Assignee: Leonard Xu
>Priority: Major
>
> Import *TemporalRowTimeJoinOperator* for event-time temporal joins.





[jira] [Assigned] (FLINK-19077) Import process time temporal join operator

2020-08-31 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-19077:


Assignee: Leonard Xu

> Import process time temporal join operator
> --
>
> Key: FLINK-19077
> URL: https://issues.apache.org/jira/browse/FLINK-19077
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Leonard Xu
>Assignee: Leonard Xu
>Priority: Major
>
> Import *TemporalProcessTimeJoinOperator* for processing-time temporal joins.





[jira] [Assigned] (FLINK-19076) Import rule to deal Temporal Join condition

2020-08-31 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-19076:


Assignee: Leonard Xu

> Import rule to deal Temporal Join condition
> ---
>
> Key: FLINK-19076
> URL: https://issues.apache.org/jira/browse/FLINK-19076
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Leonard Xu
>Assignee: Leonard Xu
>Priority: Major
>
> A temporal join is a correlate whose right input is a *Snapshot*; the *period* 
> of the snapshot comes from the left table's time attribute column. 
> That time attribute column may be pruned if no downstream `RelNode` references 
> it any more, so we need to keep the necessary conditions (e.g., the left time 
> attribute and the right primary key) in the join condition of the temporal join node.
> Given an example:
> {code:java}
> SELECT o.order_id, o.currency, o.amount, r.rate,r.rowtime 
>  FROM orders_proctime AS o JOIN 
>  versioned_currency 
>  FOR SYSTEM_TIME AS OF o.rowtime as r 
>  ON o.currency = r.currency{code}
> The select clause does not use `o.rowtime`, so the column `o.rowtime` would 
> otherwise be removed later; since the temporal join node still needs it, 
> `o.rowtime` is kept in the temporal join condition, just like [1].
> [1] 
> [https://github.com/apache/flink/blob/c601cfd662c2839f8ebc81b80879ecce55a8cbaf/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/TemporalJoinUtil.scala]
>  
>  
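What `FOR SYSTEM_TIME AS OF o.rowtime` computes — joining each order against the version of the currency row that was valid at the order's row time — can be sketched in plain Python (illustrative only; the names are taken from the example query above):

```python
def temporal_join(orders, versioned_rates):
    """For each order, pick the latest rate whose time <= the order's rowtime.

    orders: (order_id, currency, amount, rowtime)
    versioned_rates: (currency, rate, rowtime) rows of the versioned table
    """
    out = []
    for order_id, currency, amount, o_time in orders:
        # All versions of this currency that were already valid at o_time.
        candidates = [
            (r_time, rate)
            for c, rate, r_time in versioned_rates
            if c == currency and r_time <= o_time
        ]
        if candidates:
            r_time, rate = max(candidates)  # latest version as of o_time
            out.append((order_id, currency, amount, rate, r_time))
    return out

orders = [(1, "EUR", 100, 4), (2, "EUR", 50, 1)]
rates = [("EUR", 1.10, 1), ("EUR", 1.12, 3), ("EUR", 1.15, 6)]
joined = temporal_join(orders, rates)
# order 1 (rowtime 4) matches the rate valid since time 3; order 2 matches time 1.
```

This is why the left time attribute and the right primary key must survive into the join condition: without `o.rowtime` the "as of" lookup cannot be evaluated.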





[jira] [Assigned] (FLINK-19073) Improve streamExecTemporalJoinRule

2020-08-31 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-19073:


Assignee: Leonard Xu

> Improve streamExecTemporalJoinRule
> --
>
> Key: FLINK-19073
> URL: https://issues.apache.org/jira/browse/FLINK-19073
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Leonard Xu
>Assignee: Leonard Xu
>Priority: Major
>
> Improve streamExecTemporalJoinRule, which matches a temporal join node and 
> converts it to [[StreamExecTemporalJoin]]. The temporal join node is a 
> [[FlinkLogicalJoin]] whose left input is a [[FlinkLogicalRel]] and whose right 
> input is a [[FlinkLogicalSnapshot]].





[jira] [Assigned] (FLINK-19072) Import Temporal Table join rule

2020-08-31 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-19072:


Assignee: Leonard Xu

> Import Temporal Table join rule
> ---
>
> Key: FLINK-19072
> URL: https://issues.apache.org/jira/browse/FLINK-19072
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Leonard Xu
>Assignee: Leonard Xu
>Priority: Major
>
> The initial temporal table join (FOR SYSTEM_TIME AS OF) is a Correlate; we 
> need to rewrite it into a Join so that the join condition can be pushed down. 
> The join will be translated into [[StreamExecLookupJoin]] or 
> [[StreamExecTemporalJoin]] in the physical plan. 
> Import a temporal table join rule to handle this rewrite logic.





[jira] [Commented] (FLINK-19093) "Elasticsearch (v6.3.1) sink end-to-end test" failed with "SubtaskCheckpointCoordinatorImpl was closed without closing asyncCheckpointRunnable 1"

2020-08-31 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17188086#comment-17188086
 ] 

Dian Fu commented on FLINK-19093:
-

Another instance: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=6027=logs=68a897ab-3047-5660-245a-cce8f83859f6=16ca2cca-2f63-5cce-12d2-d519b930a729

> "Elasticsearch (v6.3.1) sink end-to-end test" failed with 
> "SubtaskCheckpointCoordinatorImpl was closed without closing 
> asyncCheckpointRunnable 1"
> -
>
> Key: FLINK-19093
> URL: https://issues.apache.org/jira/browse/FLINK-19093
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=5986=logs=91bf6583-3fb2-592f-e4d4-d79d79c3230a=3425d8ba-5f03-540a-c64b-51b8481bf7d6
> {code}
> 2020-08-29T22:20:02.3500263Z 2020-08-29 22:20:00,851 INFO  
> org.apache.flink.streaming.runtime.tasks.AsyncCheckpointRunnable [] - Source: 
> Sequence Source -> Flat Map -> Sink: Unnamed (1/1) - asynchronous part of 
> checkpoint 1 could not be completed.
> 2020-08-29T22:20:02.3501112Z java.lang.IllegalStateException: 
> SubtaskCheckpointCoordinatorImpl was closed without closing 
> asyncCheckpointRunnable 1
> 2020-08-29T22:20:02.3502049Z  at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:217) 
> ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-08-29T22:20:02.3503280Z  at 
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.registerAsyncCheckpointRunnable(SubtaskCheckpointCoordinatorImpl.java:371)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-08-29T22:20:02.3504647Z  at 
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.lambda$registerConsumer$2(SubtaskCheckpointCoordinatorImpl.java:479)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-08-29T22:20:02.3505882Z  at 
> org.apache.flink.streaming.runtime.tasks.AsyncCheckpointRunnable.run(AsyncCheckpointRunnable.java:95)
>  [flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-08-29T22:20:02.3506614Z  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_265]
> 2020-08-29T22:20:02.3507203Z  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_265]
> 2020-08-29T22:20:02.3507685Z  at java.lang.Thread.run(Thread.java:748) 
> [?:1.8.0_265]
> 2020-08-29T22:20:02.3509577Z 2020-08-29 22:20:00,927 INFO  
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl [] - Free slot 
> TaskSlot(index:0, state:ACTIVE, resource profile: 
> ResourceProfile{cpuCores=1., taskHeapMemory=384.000mb 
> (402653174 bytes), taskOffHeapMemory=0 bytes, managedMemory=512.000mb 
> (536870920 bytes), networkMemory=128.000mb (134217730 bytes)}, allocationId: 
> ca890bc4df19c66146370647d07bf510, jobId: 3522a3e4940d4b3cefc6dc1f22123f4b).
> 2020-08-29T22:20:02.3511425Z 2020-08-29 22:20:00,939 INFO  
> org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Remove job 
> 3522a3e4940d4b3cefc6dc1f22123f4b from job leader monitoring.
> 2020-08-29T22:20:02.3512499Z 2020-08-29 22:20:00,939 INFO  
> org.apache.flink.runtime.taskexecutor.TaskExecutor   [] - Close 
> JobManager connection for job 3522a3e4940d4b3cefc6dc1f22123f4b.
> 2020-08-29T22:20:02.3513174Z Checking for non-empty .out files...
> 2020-08-29T22:20:02.3513706Z No non-empty .out files.
> 2020-08-29T22:20:02.3513878Z 
> 2020-08-29T22:20:02.3514679Z [FAIL] 'Elasticsearch (v6.3.1) sink end-to-end 
> test' failed after 0 minutes and 37 seconds! Test exited with exit code 0 but 
> the logs contained errors, exceptions or non-empty .out files
> 2020-08-29T22:20:02.3515138Z 
> {code}





[GitHub] [flink] flinkbot edited a comment on pull request #13289: [FLINK-18548][table-planner] support flexible syntax for Temporal table join

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13289:
URL: https://github.com/apache/flink/pull/13289#issuecomment-683894616


   
   ## CI report:
   
   * 0ab49320f3f6b4ea1f2bd4539dd5dc64d98c4dbb Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6023)
 
   * 78e195f2120b88bff90da918c1a5a0ae5327285f UNKNOWN
   
   







[GitHub] [flink-web] morsapaes commented on pull request #375: [blog] Flink Community Update - August'20

2020-08-31 Thread GitBox


morsapaes commented on pull request #375:
URL: https://github.com/apache/flink-web/pull/375#issuecomment-684136724


   @rmetzger or @sjwiesman, if you can have a look and +1 this, I'd appreciate 
it.







[GitHub] [flink-web] morsapaes opened a new pull request #375: [blog] Flink Community Update - August'20

2020-08-31 Thread GitBox


morsapaes opened a new pull request #375:
URL: https://github.com/apache/flink-web/pull/375


   Slightly delayed, but adding a Community Update blogpost for August 2020.







[GitHub] [flink-web] morsapaes commented on pull request #373: Add blog post: Memory Management improvements for Flink’s JobManager in Apache Flink 1.11

2020-08-31 Thread GitBox


morsapaes commented on pull request #373:
URL: https://github.com/apache/flink-web/pull/373#issuecomment-684124219


   @azagrebin, there are a few extra files committed that shouldn't be there. 
Would you mind removing anything that is not 1) your blogpost **.md** file and 
2) any **image** files?







[GitHub] [flink-web] morsapaes commented on a change in pull request #373: Add blog post: Memory Management improvements for Flink’s JobManager in Apache Flink 1.11

2020-08-31 Thread GitBox


morsapaes commented on a change in pull request #373:
URL: https://github.com/apache/flink-web/pull/373#discussion_r480506406



##
File path: _posts/2020-09-01-flink-1.11-memory-management-improvements.md
##
@@ -0,0 +1,107 @@
+---
+layout: post
+title: "Memory Management improvements for Flink’s JobManager in Apache Flink 
1.11"
+date: 2020-09-01T15:30:00.000Z
+authors:
+- Andrey:
+  name: "Andrey Zagrebin"
+categories: news

Review comment:
   Please don't use this category for regular blogposts.









[GitHub] [flink-web] morsapaes commented on a change in pull request #373: Add blog post: Memory Management improvements for Flink’s JobManager in Apache Flink 1.11

2020-08-31 Thread GitBox


morsapaes commented on a change in pull request #373:
URL: https://github.com/apache/flink-web/pull/373#discussion_r480505850



##
File path: _posts/2020-09-01-flink-1.11-memory-management-improvements.md
##
@@ -0,0 +1,107 @@
+---
+layout: post
+title: "Memory Management improvements for Flink’s JobManager in Apache Flink 
1.11"
+date: 2020-09-01T15:30:00.000Z
+authors:
+- Andrey:
+  name: "Andrey Zagrebin"
+categories: news

Review comment:
   ```suggestion
   ```









[GitHub] [flink] flinkbot edited a comment on pull request #13287: [FLINK-13857] Remove deprecated ExecutionConfig#get/setCodeAnalysisMode

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13287:
URL: https://github.com/apache/flink/pull/13287#issuecomment-683810669


   
   ## CI report:
   
   * 60725808134a5f873d35a5f63dce893194a63238 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6019)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #13179: [FLINK-18978][state-backends] Support full table scan of key and namespace from statebackend

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13179:
URL: https://github.com/apache/flink/pull/13179#issuecomment-674957571


   
   ## CI report:
   
   * 04d5121a5f791efa1ae549ced4bd21e8b36bf19b Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6026)
 
   
   







[GitHub] [flink-benchmarks] rkhachatryan commented on a change in pull request #3: [FLINK-18905] Provide basic benchmarks for MultipleInputStreamOperator

2020-08-31 Thread GitBox


rkhachatryan commented on a change in pull request #3:
URL: https://github.com/apache/flink-benchmarks/pull/3#discussion_r480334965



##
File path: src/main/java/org/apache/flink/benchmark/MultipleInputBenchmark.java
##
@@ -0,0 +1,161 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.benchmark;
+
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
+import org.apache.flink.benchmark.functions.LongSource;
+import org.apache.flink.benchmark.functions.QueuingLongSource;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.datastream.MultipleConnectedStreams;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.functions.sink.DiscardingSink;
+import org.apache.flink.streaming.api.operators.AbstractInput;
+import org.apache.flink.streaming.api.operators.AbstractStreamOperatorFactory;
+import org.apache.flink.streaming.api.operators.AbstractStreamOperatorV2;
+import org.apache.flink.streaming.api.operators.Input;
+import org.apache.flink.streaming.api.operators.MultipleInputStreamOperator;
+import org.apache.flink.streaming.api.operators.StreamOperator;
+import org.apache.flink.streaming.api.operators.StreamOperatorParameters;
+import 
org.apache.flink.streaming.api.transformations.MultipleInputTransformation;
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.OperationsPerInvocation;
+import org.openjdk.jmh.runner.Runner;
+import org.openjdk.jmh.runner.RunnerException;
+import org.openjdk.jmh.runner.options.Options;
+import org.openjdk.jmh.runner.options.OptionsBuilder;
+import org.openjdk.jmh.runner.options.VerboseMode;
+
+import java.util.Arrays;
+import java.util.List;
+
+public class MultipleInputBenchmark extends BenchmarkBase {
+
+   public static final int RECORDS_PER_INVOCATION = 
TwoInputBenchmark.RECORDS_PER_INVOCATION;
+   public static final int ONE_IDLE_RECORDS_PER_INVOCATION = 
TwoInputBenchmark.ONE_IDLE_RECORDS_PER_INVOCATION;
+   public static final long CHECKPOINT_INTERVAL_MS = 
TwoInputBenchmark.CHECKPOINT_INTERVAL_MS;
+
+   public static void main(String[] args)
+   throws RunnerException {
+   Options options = new OptionsBuilder()
+   .verbosity(VerboseMode.NORMAL)
+   .include(".*" + 
MultipleInputBenchmark.class.getSimpleName() + ".*")
+   .build();
+
+   new Runner(options).run();
+   }
+
+   @Benchmark
+   @OperationsPerInvocation(RECORDS_PER_INVOCATION)
+   public void multiInputMapSink(FlinkEnvironmentContext context) throws 
Exception {
+
+   StreamExecutionEnvironment env = context.env;
+
+   env.enableCheckpointing(CHECKPOINT_INTERVAL_MS);
+   env.setParallelism(1);
+   env.setRestartStrategy(RestartStrategies.noRestart());
+
+   // Setting buffer timeout to 1 is an attempt to improve 
twoInputMapSink benchmark stability.
+   // Without 1ms buffer timeout, some JVM forks are much slower 
then others, making results
+   // unstable and unreliable.
+   env.setBufferTimeout(1);
+
+   long numRecordsPerInput = RECORDS_PER_INVOCATION / 2;
+   DataStreamSource source1 = env.addSource(new 
LongSource(numRecordsPerInput));
+   DataStreamSource source2 = env.addSource(new 
LongSource(numRecordsPerInput));
+   connectAndDiscard(env, source1, source2);
+
+   env.execute();
+   }
+
+   @Benchmark
+   @OperationsPerInvocation(ONE_IDLE_RECORDS_PER_INVOCATION)
+   public void multiInputOneIdleMapSink(FlinkEnvironmentContext context) 
throws Exception {
+
+   StreamExecutionEnvironment env = context.env;
+   env.enableCheckpointing(CHECKPOINT_INTERVAL_MS);
+   env.setParallelism(1);
+
+   QueuingLongSource.reset();
+   DataStreamSource source1 = 

[GitHub] [flink-benchmarks] rkhachatryan commented on a change in pull request #3: [FLINK-18905] Provide basic benchmarks for MultipleInputStreamOperator

2020-08-31 Thread GitBox


rkhachatryan commented on a change in pull request #3:
URL: https://github.com/apache/flink-benchmarks/pull/3#discussion_r480330566



##
File path: src/main/java/org/apache/flink/benchmark/MultipleInputBenchmark.java
##
@@ -0,0 +1,161 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.benchmark;
+
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
+import org.apache.flink.benchmark.functions.LongSource;
+import org.apache.flink.benchmark.functions.QueuingLongSource;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.datastream.MultipleConnectedStreams;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.functions.sink.DiscardingSink;
+import org.apache.flink.streaming.api.operators.AbstractInput;
+import org.apache.flink.streaming.api.operators.AbstractStreamOperatorFactory;
+import org.apache.flink.streaming.api.operators.AbstractStreamOperatorV2;
+import org.apache.flink.streaming.api.operators.Input;
+import org.apache.flink.streaming.api.operators.MultipleInputStreamOperator;
+import org.apache.flink.streaming.api.operators.StreamOperator;
+import org.apache.flink.streaming.api.operators.StreamOperatorParameters;
+import 
org.apache.flink.streaming.api.transformations.MultipleInputTransformation;
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.OperationsPerInvocation;
+import org.openjdk.jmh.runner.Runner;
+import org.openjdk.jmh.runner.RunnerException;
+import org.openjdk.jmh.runner.options.Options;
+import org.openjdk.jmh.runner.options.OptionsBuilder;
+import org.openjdk.jmh.runner.options.VerboseMode;
+
+import java.util.Arrays;
+import java.util.List;
+
+public class MultipleInputBenchmark extends BenchmarkBase {
+
+   public static final int RECORDS_PER_INVOCATION = TwoInputBenchmark.RECORDS_PER_INVOCATION;
+   public static final int ONE_IDLE_RECORDS_PER_INVOCATION = TwoInputBenchmark.ONE_IDLE_RECORDS_PER_INVOCATION;
+   public static final long CHECKPOINT_INTERVAL_MS = TwoInputBenchmark.CHECKPOINT_INTERVAL_MS;
+
+   public static void main(String[] args)
+   throws RunnerException {
+   Options options = new OptionsBuilder()
+   .verbosity(VerboseMode.NORMAL)
   .include(".*" + MultipleInputBenchmark.class.getSimpleName() + ".*")
+   .build();
+
+   new Runner(options).run();
+   }
+
+   @Benchmark
+   @OperationsPerInvocation(RECORDS_PER_INVOCATION)
+   public void multiInputMapSink(FlinkEnvironmentContext context) throws Exception {
+
+   StreamExecutionEnvironment env = context.env;
+
+   env.enableCheckpointing(CHECKPOINT_INTERVAL_MS);
+   env.setParallelism(1);
+   env.setRestartStrategy(RestartStrategies.noRestart());
+
+   // Setting buffer timeout to 1 is an attempt to improve twoInputMapSink benchmark stability.
+   // Without 1ms buffer timeout, some JVM forks are much slower than others, making results
+   // unstable and unreliable.
+   env.setBufferTimeout(1);
+
+   long numRecordsPerInput = RECORDS_PER_INVOCATION / 2;
+   DataStreamSource<Long> source1 = env.addSource(new LongSource(numRecordsPerInput));
+   DataStreamSource<Long> source2 = env.addSource(new LongSource(numRecordsPerInput));
+   connectAndDiscard(env, source1, source2);
+
+   env.execute();
+   }
+
+   @Benchmark
+   @OperationsPerInvocation(ONE_IDLE_RECORDS_PER_INVOCATION)
+   public void multiInputOneIdleMapSink(FlinkEnvironmentContext context) throws Exception {
+
+   StreamExecutionEnvironment env = context.env;
+   env.enableCheckpointing(CHECKPOINT_INTERVAL_MS);
+   env.setParallelism(1);
+
+   QueuingLongSource.reset();
+   DataStreamSource source1 = 

[GitHub] [flink] flinkbot edited a comment on pull request #13290: [FLINK-19100] Update the avro format in respect to required dependencies

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13290:
URL: https://github.com/apache/flink/pull/13290#issuecomment-683917073


   
   ## CI report:
   
   * defdd05c62c6d9945cfc5964942b220f6d8952ee Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6024)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13179: [FLINK-18978][state-backends] Support full table scan of key and namespace from statebackend

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13179:
URL: https://github.com/apache/flink/pull/13179#issuecomment-674957571


   
   ## CI report:
   
   * cab18cf6d00a0bb3af1e4bf6815e58c96359ee3a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5632)
 
   * 04d5121a5f791efa1ae549ced4bd21e8b36bf19b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] sjwiesman commented on pull request #13179: [FLINK-18978][state-backends] Support full table scan of key and namespace from statebackend

2020-08-31 Thread GitBox


sjwiesman commented on pull request #13179:
URL: https://github.com/apache/flink/pull/13179#issuecomment-683960365


   @Myasuka Thank you for taking a look; sorry for the delay, I have been on vacation. Please take another look when you have time.







[GitHub] [flink] sjwiesman commented on a change in pull request #13179: [FLINK-18978][state-backends] Support full table scan of key and namespace from statebackend

2020-08-31 Thread GitBox


sjwiesman commented on a change in pull request #13179:
URL: https://github.com/apache/flink/pull/13179#discussion_r480317878



##
File path: 
flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/contrib/streaming/state/RocksDBRocksStateKeysAndNamespacesIteratorTest.java
##
@@ -0,0 +1,163 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.contrib.streaming.state;
+
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.state.ValueState;
+import org.apache.flink.api.common.state.ValueStateDescriptor;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.common.typeutils.base.IntSerializer;
+import org.apache.flink.api.common.typeutils.base.StringSerializer;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.contrib.streaming.state.iterator.RocksStateKeysAndNamespaceIterator;
+import org.apache.flink.core.fs.CloseableRegistry;
+import org.apache.flink.core.memory.DataOutputSerializer;
+import org.apache.flink.metrics.groups.UnregisteredMetricsGroup;
+import org.apache.flink.runtime.operators.testutils.MockEnvironment;
+import org.apache.flink.runtime.query.TaskKvStateRegistry;
+import org.apache.flink.runtime.state.KeyGroupRange;
+import org.apache.flink.runtime.state.filesystem.FsStateBackend;
+import org.apache.flink.runtime.state.ttl.TtlTimeProvider;
+
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+import org.rocksdb.ColumnFamilyHandle;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.List;
+import java.util.function.Function;
+
+import static org.mockito.Mockito.mock;
+
+/**
+ * Tests for the RocksIteratorWrapper.
+ */
+public class RocksDBRocksStateKeysAndNamespacesIteratorTest {

Review comment:
   I pulled out some test utilities, but these two are subtly different in ways that made truly combining them turn into a mess.









[GitHub] [flink] flinkbot edited a comment on pull request #13287: [FLINK-13857] Remove deprecated ExecutionConfig#get/setCodeAnalysisMode

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13287:
URL: https://github.com/apache/flink/pull/13287#issuecomment-683810669


   
   ## CI report:
   
   * 2557b8aae99f79911f57a6ddbb006038d8ad9f61 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6017)
 
   * 60725808134a5f873d35a5f63dce893194a63238 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6019)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] sjwiesman commented on a change in pull request #13179: [FLINK-18978][state-backends] Support full table scan of key and namespace from statebackend

2020-08-31 Thread GitBox


sjwiesman commented on a change in pull request #13179:
URL: https://github.com/apache/flink/pull/13179#discussion_r480295815



##
File path: 
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/iterator/RocksStateKeysAndNamespaceIterator.java
##
@@ -0,0 +1,120 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.contrib.streaming.state.iterator;
+
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.contrib.streaming.state.RocksDBKeySerializationUtils;
+import org.apache.flink.contrib.streaming.state.RocksIteratorWrapper;
+import org.apache.flink.core.memory.DataInputDeserializer;
+import org.apache.flink.util.FlinkRuntimeException;
+
+import javax.annotation.Nonnull;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.NoSuchElementException;
+
+/**
+ * Adapter class to bridge between {@link RocksIteratorWrapper} and {@link Iterator} to iterate over the keys
+ * and namespaces. This class is not thread safe.
+ *
+ * @param <K> the type of the iterated keys in RocksDB.
+ * @param <N> the type of the iterated namespaces in RocksDB.
+ */
+public class RocksStateKeysAndNamespaceIterator<K, N> implements Iterator<Tuple2<K, N>>, AutoCloseable {

Review comment:
   -1 they are semantically similar but their `hasNext` methods are very 
different. In particular, the way they use the namespace bytes is quite 
different and I don't think a base class makes sense here. They both already 
share code via the `RocksDBKeySerializationUtils` class. 
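
   The adapter idea under review (bridging a cursor-style RocksDB iterator into `java.util.Iterator`) can be sketched roughly as below. This is a hedged illustration: `Cursor`, `ListCursor`, and `KeyIterator` are made-up stand-ins, not Flink's actual `RocksIteratorWrapper` or `RocksStateKeysAndNamespaceIterator` classes.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

public class CursorAdapterSketch {

    /** Stand-in for a low-level cursor API (isValid/key/next, like RocksDB's). */
    public interface Cursor {
        boolean isValid();
        byte[] key();
        void next();
    }

    /** Simple in-memory cursor over a fixed list of keys, for demonstration. */
    public static final class ListCursor implements Cursor {
        private final List<byte[]> keys;
        private int pos = 0;

        public ListCursor(List<byte[]> keys) { this.keys = keys; }

        @Override public boolean isValid() { return pos < keys.size(); }
        @Override public byte[] key() { return keys.get(pos); }
        @Override public void next() { pos++; }
    }

    /** Adapter: exposes the cursor through the java.util.Iterator contract. */
    public static final class KeyIterator implements Iterator<byte[]> {
        private final Cursor cursor;

        public KeyIterator(Cursor cursor) { this.cursor = cursor; }

        @Override public boolean hasNext() { return cursor.isValid(); }

        @Override public byte[] next() {
            if (!hasNext()) {
                throw new NoSuchElementException();
            }
            byte[] k = cursor.key();
            cursor.next(); // advance the underlying cursor eagerly
            return k;
        }
    }

    public static void main(String[] args) {
        List<byte[]> keys = new ArrayList<>();
        keys.add(new byte[] {1});
        keys.add(new byte[] {2});
        Iterator<byte[]> it = new KeyIterator(new ListCursor(keys));
        int count = 0;
        while (it.hasNext()) {
            it.next();
            count++;
        }
        System.out.println(count); // prints 2
    }
}
```

The real classes additionally deserialize the raw key bytes into typed key/namespace pairs; the adapter shape, however, is the same.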









[jira] [Closed] (FLINK-18977) Extract WindowOperator construction into a builder class

2020-08-31 Thread Seth Wiesman (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seth Wiesman closed FLINK-18977.

Resolution: Fixed

>  Extract WindowOperator construction into a builder class
> -
>
> Key: FLINK-18977
> URL: https://issues.apache.org/jira/browse/FLINK-18977
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Seth Wiesman
>Assignee: Seth Wiesman
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Extracts the logic from WindowedStream into a builder class so that there is 
> one definitive way to create and configure the window operator. This is a 
> pre-requisite to supporting the window operator in the state processor api. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18977) Extract WindowOperator construction into a builder class

2020-08-31 Thread Seth Wiesman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17187920#comment-17187920
 ] 

Seth Wiesman commented on FLINK-18977:
--

fixed in master: fe867a6f55e84aad803a12e5df31074a1404a9e8

>  Extract WindowOperator construction into a builder class
> -
>
> Key: FLINK-18977
> URL: https://issues.apache.org/jira/browse/FLINK-18977
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Seth Wiesman
>Assignee: Seth Wiesman
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Extracts the logic from WindowedStream into a builder class so that there is 
> one definitive way to create and configure the window operator. This is a 
> pre-requisite to supporting the window operator in the state processor api. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] sjwiesman commented on pull request #13178: [FLINK-18977][datastream] Extract WindowOperator construction into a builder class

2020-08-31 Thread GitBox


sjwiesman commented on pull request #13178:
URL: https://github.com/apache/flink/pull/13178#issuecomment-683933447


   Thanks, will fix and merge.







[GitHub] [flink] sjwiesman closed pull request #13178: [FLINK-18977][datastream] Extract WindowOperator construction into a builder class

2020-08-31 Thread GitBox


sjwiesman closed pull request #13178:
URL: https://github.com/apache/flink/pull/13178


   







[GitHub] [flink] sjwiesman commented on a change in pull request #13178: [FLINK-18977][datastream] Extract WindowOperator construction into a builder class

2020-08-31 Thread GitBox


sjwiesman commented on a change in pull request #13178:
URL: https://github.com/apache/flink/pull/13178#discussion_r480291093



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/WindowOperatorBuilder.java
##
@@ -0,0 +1,378 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.runtime.operators.windowing;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.common.ExecutionConfig;
+import org.apache.flink.api.common.functions.AggregateFunction;
+import org.apache.flink.api.common.functions.FoldFunction;
+import org.apache.flink.api.common.functions.Function;
+import org.apache.flink.api.common.functions.ReduceFunction;
+import org.apache.flink.api.common.functions.RichFunction;
+import org.apache.flink.api.common.state.AggregatingStateDescriptor;
+import org.apache.flink.api.common.state.AppendingState;
+import org.apache.flink.api.common.state.FoldingStateDescriptor;
+import org.apache.flink.api.common.state.ListStateDescriptor;
+import org.apache.flink.api.common.state.ReducingStateDescriptor;
+import org.apache.flink.api.common.state.StateDescriptor;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.java.functions.KeySelector;
+import org.apache.flink.streaming.api.functions.windowing.AggregateApplyWindowFunction;
+import org.apache.flink.streaming.api.functions.windowing.FoldApplyProcessWindowFunction;
+import org.apache.flink.streaming.api.functions.windowing.FoldApplyWindowFunction;
+import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
+import org.apache.flink.streaming.api.functions.windowing.ReduceApplyProcessWindowFunction;
+import org.apache.flink.streaming.api.functions.windowing.ReduceApplyWindowFunction;
+import org.apache.flink.streaming.api.functions.windowing.WindowFunction;
+import org.apache.flink.streaming.api.windowing.assigners.BaseAlignedWindowAssigner;
+import org.apache.flink.streaming.api.windowing.assigners.MergingWindowAssigner;
+import org.apache.flink.streaming.api.windowing.assigners.WindowAssigner;
+import org.apache.flink.streaming.api.windowing.evictors.Evictor;
+import org.apache.flink.streaming.api.windowing.time.Time;
+import org.apache.flink.streaming.api.windowing.triggers.Trigger;
+import org.apache.flink.streaming.api.windowing.windows.Window;
+import org.apache.flink.streaming.runtime.operators.windowing.functions.InternalAggregateProcessWindowFunction;
+import org.apache.flink.streaming.runtime.operators.windowing.functions.InternalIterableProcessWindowFunction;
+import org.apache.flink.streaming.runtime.operators.windowing.functions.InternalIterableWindowFunction;
+import org.apache.flink.streaming.runtime.operators.windowing.functions.InternalSingleValueProcessWindowFunction;
+import org.apache.flink.streaming.runtime.operators.windowing.functions.InternalSingleValueWindowFunction;
+import org.apache.flink.streaming.runtime.operators.windowing.functions.InternalWindowFunction;
+import org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer;
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+import org.apache.flink.util.OutputTag;
+import org.apache.flink.util.Preconditions;
+
+import javax.annotation.Nullable;
+
+import java.lang.reflect.Type;
+
+/**
+ * A builder for creating {@code WindowOperator}'s.

Review comment:
   fancy
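
   The builder discussed in this review can be sketched in miniature as follows. This is a hedged illustration with made-up names (`WindowOperatorSketch`, `Builder`, `assigner`, `trigger`); it shows the fluent-builder shape only, not Flink's actual `WindowOperatorBuilder` API.

```java
public class WindowOperatorSketch {

    /** The "operator" being configured; immutable once built. */
    public static final class WindowOperator {
        public final String assigner;
        public final String trigger;

        WindowOperator(String assigner, String trigger) {
            this.assigner = assigner;
            this.trigger = trigger;
        }
    }

    /** One definitive place to create and configure the operator. */
    public static final class Builder {
        private String assigner = "tumbling";  // defaults live in the builder
        private String trigger = "event-time";

        public Builder assigner(String assigner) {
            this.assigner = assigner;
            return this;                       // fluent chaining
        }

        public Builder trigger(String trigger) {
            this.trigger = trigger;
            return this;
        }

        public WindowOperator build() {
            return new WindowOperator(assigner, trigger);
        }
    }

    public static void main(String[] args) {
        WindowOperator op = new Builder().assigner("sliding").trigger("count").build();
        System.out.println(op.assigner + "/" + op.trigger); // prints sliding/count
    }
}
```

Centralizing construction this way gives callers (such as the state processor API mentioned in the JIRA issue) a single configuration path instead of duplicating the wiring logic.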









[GitHub] [flink] flinkbot edited a comment on pull request #13288: [FLINK-19084] Remove deprecated methods from ExecutionConfig

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13288:
URL: https://github.com/apache/flink/pull/13288#issuecomment-683810782


   
   ## CI report:
   
   * 40a28918bfb1a79f700fd35a916999d83d36acb8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6020)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13271: [FLINK-19043][docs-zh] Translate the 'Logging' page of 'Debugging & Monitoring' into Chinese

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13271:
URL: https://github.com/apache/flink/pull/13271#issuecomment-682323547


   
   ## CI report:
   
   * 64a6d4e0e88b1b73b86a00a0dc7ef7e59e538967 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6022)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13290: [FLINK-19100] Update the avro format in respect to required dependencies

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13290:
URL: https://github.com/apache/flink/pull/13290#issuecomment-683917073


   
   ## CI report:
   
   * defdd05c62c6d9945cfc5964942b220f6d8952ee Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6024)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13235: [FLINK-19036][docs-zh] Translate page 'Application Profiling & Debugging' of 'Debugging & Monitoring' into Chinese

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13235:
URL: https://github.com/apache/flink/pull/13235#issuecomment-679625638


   
   ## CI report:
   
   * 2375ec0fb6b8de2cffbb3fb8af54fe5b701353f4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6021)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink-statefun] igalshilman commented on a change in pull request #137: [FLINK-19102] [core, sdk] Make StateBinder a per-FunctionType entity

2020-08-31 Thread GitBox


igalshilman commented on a change in pull request #137:
URL: https://github.com/apache/flink-statefun/pull/137#discussion_r480276203



##
File path: 
statefun-sdk/src/main/java/org/apache/flink/statefun/sdk/state/PersistedAppendingBuffer.java
##
@@ -160,6 +160,13 @@ public void clear() {
 accessor.clear();
   }
 
+  @Override

Review comment:
   I think it would be somewhat unintuitive for users not to see the actual value; what do you think?









[GitHub] [flink] flinkbot commented on pull request #13290: [FLINK-19100] Update the avro format in respect to required dependencies

2020-08-31 Thread GitBox


flinkbot commented on pull request #13290:
URL: https://github.com/apache/flink/pull/13290#issuecomment-683917073


   
   ## CI report:
   
   * defdd05c62c6d9945cfc5964942b220f6d8952ee UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13286: [FLINK-19093][task] Fix isActive check in AsyncCheckpointRunnable

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13286:
URL: https://github.com/apache/flink/pull/13286#issuecomment-683777071


   
   ## CI report:
   
   * 358ebf0c0aaffb505bc95a97d6133183ec749d7a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6016)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13289: [FLINK-18548][table-planner] support flexible syntax for Temporal table join

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13289:
URL: https://github.com/apache/flink/pull/13289#issuecomment-683894616


   
   ## CI report:
   
   * 0ab49320f3f6b4ea1f2bd4539dd5dc64d98c4dbb Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6023)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #13290: [FLINK-19100] Update the avro format in respect to required dependencies

2020-08-31 Thread GitBox


flinkbot commented on pull request #13290:
URL: https://github.com/apache/flink/pull/13290#issuecomment-683909440


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit defdd05c62c6d9945cfc5964942b220f6d8952ee (Mon Aug 31 
17:08:39 UTC 2020)
   
   **Warnings:**
* Documentation files were touched, but no `.zh.md` files: Update Chinese 
documentation or file Jira ticket.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied
according to the order of the review items. For consensus, approval by a Flink
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Updated] (FLINK-19100) Fix note about hadoop dependency in flink-avro

2020-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-19100:
---
Labels: pull-request-available  (was: )

> Fix note about hadoop dependency in flink-avro
> --
>
> Key: FLINK-19100
> URL: https://issues.apache.org/jira/browse/FLINK-19100
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Formats (JSON, Avro, Parquet, ORC, 
> SequenceFile)
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> The documentation of {{flink-avro}} format claims hadoop is a required 
> dependency the user should provide, which is incorrect. The required 
> dependencies include avro and its transitive dependencies.
> https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/formats/avro.html
> {code}
> You can download flink-avro from Download, and requires additional Hadoop 
> dependency for cluster execution.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dawidwys commented on pull request #13290: [FLINK-19100] Update the avro format in respect to required dependencies

2020-08-31 Thread GitBox


dawidwys commented on pull request #13290:
URL: https://github.com/apache/flink/pull/13290#issuecomment-683908168


   cc @uce @twalthr 







[GitHub] [flink] dawidwys opened a new pull request #13290: [FLINK-19100] Update the avro format in respect to required dependencies

2020-08-31 Thread GitBox


dawidwys opened a new pull request #13290:
URL: https://github.com/apache/flink/pull/13290


   ## What is the purpose of the change
   
   Correct the required dependencies for avro format. Avro instead of whole 
hadoop.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / **docs** / 
JavaDocs / not documented)
   







[jira] [Resolved] (FLINK-18786) Performance regression on 2020.07.29 in some state backend benchmarks

2020-08-31 Thread Roman Khachatryan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Khachatryan resolved FLINK-18786.
---
Resolution: Fixed

> Performance regression on 2020.07.29 in some state backend benchmarks
> --
>
> Key: FLINK-18786
> URL: https://issues.apache.org/jira/browse/FLINK-18786
> Project: Flink
>  Issue Type: Bug
>  Components: Benchmarks, Runtime / State Backends
>Affects Versions: 1.12.0
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
> Attachments: statebackend-fs-maybe-regression.png
>
>
> E.g.
> [http://codespeed.dak8s.net:8000/timeline/#/?exe=1,3=stateBackends.FS=2=200=off=on=on]
> [http://codespeed.dak8s.net:8000/timeline/?ben=stateBackends.ROCKS=2]
> [http://codespeed.dak8s.net:8000/timeline/?ben=stateBackends.ROCKS=2#/?exe=1,3=stateBackends.ROCKS_INC=2=200=off=on=on]
> [http://codespeed.dak8s.net:8000/timeline/?ben=slidingWindow=2] 
>  
> See attachment for some hints





[GitHub] [flink] flinkbot edited a comment on pull request #13289: [FLINK-18548][table-planner] support flexible syntax for Temporal table join

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13289:
URL: https://github.com/apache/flink/pull/13289#issuecomment-683894616


   
   ## CI report:
   
   * 0ab49320f3f6b4ea1f2bd4539dd5dc64d98c4dbb Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6023)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #13289: [FLINK-18548][table-planner] support flexible syntax for Temporal table join

2020-08-31 Thread GitBox


flinkbot commented on pull request #13289:
URL: https://github.com/apache/flink/pull/13289#issuecomment-683894616


   
   ## CI report:
   
   * 0ab49320f3f6b4ea1f2bd4539dd5dc64d98c4dbb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13288: [FLINK-19084] Remove deprecated methods from ExecutionConfig

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13288:
URL: https://github.com/apache/flink/pull/13288#issuecomment-683810782


   
   ## CI report:
   
   * 2205036d657b0eaba00159a0179d355dadce9d17 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6018)
 
   * 40a28918bfb1a79f700fd35a916999d83d36acb8 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6020)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #13289: [FLINK-18548][table-planner] support flexible syntax for Temporal table join

2020-08-31 Thread GitBox


flinkbot commented on pull request #13289:
URL: https://github.com/apache/flink/pull/13289#issuecomment-683886127


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 0ab49320f3f6b4ea1f2bd4539dd5dc64d98c4dbb (Mon Aug 31 
16:27:27 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] flinkbot edited a comment on pull request #13285: [FLINK-19049][table] Fix validation of table functions in projections

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13285:
URL: https://github.com/apache/flink/pull/13285#issuecomment-683744639


   
   ## CI report:
   
   * 402986db7607026718d88e0651e0e9baf16d7e76 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6013)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13271: [FLINK-19043][docs-zh] Translate the 'Logging' page of 'Debugging & Monitoring' into Chinese

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13271:
URL: https://github.com/apache/flink/pull/13271#issuecomment-682323547


   
   ## CI report:
   
   * 077964070424e9463205a8a789cea37142d709bf Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5967)
 
   * 64a6d4e0e88b1b73b86a00a0dc7ef7e59e538967 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6022)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] leonardBang opened a new pull request #13289: [FLINK-18548][table-planner] support flexible syntax for Temporal table join

2020-08-31 Thread GitBox


leonardBang opened a new pull request #13289:
URL: https://github.com/apache/flink/pull/13289


   ## What is the purpose of the change
   
   * This pull request supports flexible syntax for temporal table joins. 
   * It also supports computed columns on a temporal table, i.e. a projection 
on top of a TableScan.
   
   ## Brief change log
   
- Override `convertFrom()` in `SqlToRelConverter` to support flexible 
`LogicalSnapshot`
   
   ## Verifying this change
   
   Added unit tests and IT cases to cover the change.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): ( no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   



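For context, the temporal table join syntax this PR loosens follows the SQL standard's `FOR SYSTEM_TIME AS OF` clause; a typical query of this kind (table and column names here are hypothetical) looks like:

```sql
-- Hypothetical example: enrich orders with the currency rate valid at order time
SELECT o.order_id, o.price * r.rate AS converted_price
FROM Orders AS o
JOIN RatesHistory FOR SYSTEM_TIME AS OF o.order_time AS r
  ON o.currency = r.currency;
```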




[GitHub] [flink] flinkbot edited a comment on pull request #13235: [FLINK-19036][docs-zh] Translate page 'Application Profiling & Debugging' of 'Debugging & Monitoring' into Chinese

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13235:
URL: https://github.com/apache/flink/pull/13235#issuecomment-679625638


   
   ## CI report:
   
   * 96c1c36cfd2ee8926f97dd48fa88d6fe3e0bdaba Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5839)
 
   * 2375ec0fb6b8de2cffbb3fb8af54fe5b701353f4 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6021)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13271: [FLINK-19043][docs-zh] Translate the 'Logging' page of 'Debugging & Monitoring' into Chinese

2020-08-31 Thread GitBox


flinkbot edited a comment on pull request #13271:
URL: https://github.com/apache/flink/pull/13271#issuecomment-682323547


   
   ## CI report:
   
   * 077964070424e9463205a8a789cea37142d709bf Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5967)
 
   * 64a6d4e0e88b1b73b86a00a0dc7ef7e59e538967 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] RocMarshal commented on a change in pull request #13271: [FLINK-19043][docs-zh] Translate the 'Logging' page of 'Debugging & Monitoring' into Chinese

2020-08-31 Thread GitBox


RocMarshal commented on a change in pull request #13271:
URL: https://github.com/apache/flink/pull/13271#discussion_r480220010



##
File path: docs/monitoring/logging.zh.md
##
@@ -23,47 +23,51 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-The logging in Flink is implemented using the slf4j logging interface. As 
underlying logging framework, log4j2 is used. We also provide logback 
configuration files and pass them to the JVM's as properties. Users willing to 
use logback instead of log4j2 can just exclude log4j2 (or delete it from the 
lib/ folder).
+Flink 中的日志记录是使用 slf4j 日志接口实现的。使用 log4j2 作为底层日志框架。我们也支持了 logback 
日志配置,只要将其配置文件作为参数传递给 JVM 即可。愿意使用 logback 而不是 log4j2 的用户只需排除 log4j2 的依赖(或从 lib/ 
文件夹中删除它)即可。
 
 * This will be replaced by the TOC
 {:toc}
 
-## Configuring Log4j2
+
 
-Log4j2 is controlled using property files. In Flink's case, the file is 
usually called `log4j.properties`. We pass the filename and location of this 
file using the `-Dlog4j.configurationFile=` parameter to the JVM.
+## 配置 Log4j2
 
-Flink ships with the following default properties files:
+Log4j2 是使用配置文件指定的。在 Flink 的使用中,该文件通常命名为 `log4j.properties`。我们使用 
`-Dlog4j.configurationFile=` 参数将该文件的文件名和位置传递给 JVM。
 
-- `log4j-cli.properties`: Used by the Flink command line client (e.g. `flink 
run`) (not code executed on the cluster)
-- `log4j-session.properties`: Used by the Flink command line client when 
starting a YARN or Kubernetes session (`yarn-session.sh`, 
`kubernetes-session.sh`)
-- `log4j.properties`: JobManager/Taskmanager logs (both standalone and YARN)
+Flink 附带以下默认日志配置文件:
 
-### Compatibility with Log4j1
+- `log4j-cli.properties`:由 Flink 命令行客户端使用(例如 `flink run`)(不包括在集群上执行的代码)
+- `log4j-session.properties`:Flink 命令行客户端在启动 YARN 或 Kubernetes session 
时使用(`yarn-session.sh`,`kubernetes-session.sh`)
+- `log4j.properties`:作为 JobManager/TaskManager 日志配置使用(standalone 和 YARN 
两种模式下皆使用)
 
-Flink ships with the [Log4j API 
bridge](https://logging.apache.org/log4j/log4j-2.2/log4j-1.2-api/index.html), 
allowing existing applications that work against Log4j1 classes to continue 
working.
+
 
-If you have custom Log4j1 properties files or code that relies on Log4j1, 
please check out the official Log4j 
[compatibility](https://logging.apache.org/log4j/2.x/manual/compatibility.html) 
and [migration](https://logging.apache.org/log4j/2.x/manual/migration.html) 
guides.
+### 与 Log4j1 的兼容性
 
-## Configuring Log4j1
+Flink 附带了 [Log4j API 
bridge](https://logging.apache.org/log4j/log4j-2.2/log4j-1.2-api/index.html),使得对
 Log4j1 工作的现有应用程序继续工作。
 
-To use Flink with Log4j1 you must ensure that:
-- `org.apache.logging.log4j:log4j-core`, 
`org.apache.logging.log4j:log4j-slf4j-impl` and 
`org.apache.logging.log4j:log4j-1.2-api` are not on the classpath,
-- `log4j:log4j`, `org.slf4j:slf4j-log4j12`, 
`org.apache.logging.log4j:log4j-to-slf4j` and 
`org.apache.logging.log4j:log4j-api` are on the classpath.
+如果你有基于 Log4j1 的自定义配置文件或代码,请查看官方 Log4j 
[兼容性](https://logging.apache.org/log4j/2.x/manual/compatibility.html)和[迁移](https://logging.apache.org/log4j/2.x/manual/migration.html)指南。

Review comment:
   ```suggestion
   如果你有基于 Log4j 的自定义配置文件或代码,请查看官方 Log4j 
[兼容性](https://logging.apache.org/log4j/2.x/manual/compatibility.html)和[迁移](https://logging.apache.org/log4j/2.x/manual/migration.html)指南。
   ```





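The mechanism described in the diff above — a properties-format Log4j2 configuration handed to the JVM via `-Dlog4j.configurationFile=` — can be illustrated with a minimal file. This is a sketch, not Flink's shipped defaults:

```properties
# Minimal Log4j2 properties-format config (illustrative sketch)
rootLogger.level = INFO
rootLogger.appenderRef.console.ref = ConsoleAppender

appender.console.name = ConsoleAppender
appender.console.type = CONSOLE
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
```

It would be passed to the JVM as `-Dlog4j.configurationFile=file:/path/to/log4j.properties`.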



