[GitHub] [flink-web] tzulitai closed pull request #379: [release] [statefun] Announcement post and downloads for Stateful Functions v2.2.0

2020-09-27 Thread GitBox


tzulitai closed pull request #379:
URL: https://github.com/apache/flink-web/pull/379


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-19408) Update flink-statefun-docker release scripts for cross release Java 8 and 11

2020-09-27 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-19408.
---
Fix Version/s: (was: statefun-2.3.0)
   statefun-2.2.0
   Resolution: Fixed

flink-statefun-docker/master: 6f384d34bc74ae58b9bfb539a3197812f38c7f3c

> Update flink-statefun-docker release scripts for cross release Java 8 and 11
> 
>
> Key: FLINK-19408
> URL: https://issues.apache.org/jira/browse/FLINK-19408
> Project: Flink
>  Issue Type: New Feature
>  Components: Stateful Functions
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: statefun-2.2.0
>
>
> Currently, the {{add-version.sh}} script in the {{flink-statefun-docker}} 
> repo does not generate Dockerfiles for different Java versions.
> Since we have decided to cross-release images for Java 8 and 11, that script 
> needs to be updated as well.
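
For illustration only, a hedged sketch of the cross-release idea: one Dockerfile per (release, Java version) pair. This is not the actual `add-version.sh` shell script from the `flink-statefun-docker` repo; the template, paths, and version values below are assumptions.

```python
# Hypothetical sketch: emit one Dockerfile per Java version so that images for
# Java 8 and 11 can be cross-released for a given Stateful Functions release.
from pathlib import Path

STATEFUN_VERSION = "2.2.0"      # assumed release being added
JAVA_VERSIONS = ["8", "11"]

TEMPLATE = """FROM openjdk:{java}-jre
# ... install the Stateful Functions {version} distribution here ...
"""

for java in JAVA_VERSIONS:
    out_dir = Path(STATEFUN_VERSION) / f"java{java}"
    out_dir.mkdir(parents=True, exist_ok=True)
    (out_dir / "Dockerfile").write_text(
        TEMPLATE.format(java=java, version=STATEFUN_VERSION))
```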



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-statefun-docker] tzulitai merged pull request #8: [FLINK-19408] Update add-version.sh for cross-releasing Java 8 and 11

2020-09-27 Thread GitBox


tzulitai merged pull request #8:
URL: https://github.com/apache/flink-statefun-docker/pull/8


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-statefun-docker] tzulitai merged pull request #10: [release] Add Dockerfiles for 2.2.0 release

2020-09-27 Thread GitBox


tzulitai merged pull request #10:
URL: https://github.com/apache/flink-statefun-docker/pull/10


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-19432) Whether to capture the updates which don't change any monitored columns

2020-09-27 Thread Zhengchao Shi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17203008#comment-17203008
 ] 

Zhengchao Shi commented on FLINK-19432:
---

But sometimes this will cause a lot of "-U" and "+U" output, making the 
downstream operators do useless computation.

> Whether to capture the updates which don't change any monitored columns
> ---
>
> Key: FLINK-19432
> URL: https://issues.apache.org/jira/browse/FLINK-19432
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.11.1
>Reporter: Zhengchao Shi
>Priority: Major
> Fix For: 1.12.0
>
>
> With `debezium-json` and `canal-json`: 
> Whether to capture updates which don't change any monitored columns. This 
> may happen if the monitored columns (the columns defined in the Flink SQL DDL) are a 
> subset of the columns in the database table. We can provide an optional option, 
> defaulting to 'true', which means all updates will be captured. You can set it to 
> 'false' to capture only updates that change the monitored columns.
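
For context, a minimal PyFlink sketch of the situation described above, under the assumption of a Kafka source in `debezium-json` format; the connector properties are illustrative and the option name mentioned in the comments is hypothetical, not an existing Flink option.

```python
# Sketch: the DDL declares only a subset of the database table's columns, so a
# Debezium update that touches an unmonitored column (e.g. an 'updated_at'
# column) still produces a -U/+U pair whose monitored values are identical.
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import StreamTableEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
t_env = StreamTableEnvironment.create(env)

# The proposed switch (name hypothetical, e.g.
# 'debezium-json.capture-unchanged-updates' = 'false') would be added under WITH.
t_env.execute_sql("""
    CREATE TABLE orders (
        order_id BIGINT,
        amount DECIMAL(10, 2)
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'orders',
        'properties.bootstrap.servers' = 'localhost:9092',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'debezium-json'
    )
""")
```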



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-19417) Fix the bug of the method from_data_stream in table_environement

2020-09-27 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-19417.
---
Resolution: Fixed

master: 900071bb7a9073f67b8af1097ee858e59626593c

> Fix the bug of the method from_data_stream in table_environement
> 
>
> Key: FLINK-19417
> URL: https://issues.apache.org/jira/browse/FLINK-19417
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: Huang Xingbo
>Assignee: Nicholas Jiang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> The parameter `fields` should accept a str or Expression varargs, not the current 
> list of str. Also, the table_env object passed to the Table object should be Python's 
> TableEnvironment, not Java's TableEnvironment.
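
For illustration, a hedged sketch of the call forms described above, assuming the post-fix PyFlink 1.12 API; the field names and data are made up.

```python
from pyflink.common.typeinfo import Types
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import StreamTableEnvironment
from pyflink.table.expressions import col

env = StreamExecutionEnvironment.get_execution_environment()
t_env = StreamTableEnvironment.create(env)
ds = env.from_collection(
    [(1, 'hello'), (2, 'world')],
    type_info=Types.ROW([Types.INT(), Types.STRING()]))

# fields passed as a single string expression ...
t1 = t_env.from_data_stream(ds, "a, b")
# ... or as Expression varargs, rather than a list of strings:
t2 = t_env.from_data_stream(ds, col("a"), col("b"))
```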



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19417) Fix the bug of the method from_data_stream in table_environement

2020-09-27 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19417:

Affects Version/s: 1.12.0

> Fix the bug of the method from_data_stream in table_environement
> 
>
> Key: FLINK-19417
> URL: https://issues.apache.org/jira/browse/FLINK-19417
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.12.0
>Reporter: Huang Xingbo
>Assignee: Nicholas Jiang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> The parameter `fields` should accept a str or Expression varargs, not the current 
> list of str. Also, the table_env object passed to the Table object should be Python's 
> TableEnvironment, not Java's TableEnvironment.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dianfu closed pull request #13491: [FLINK-19417][python] Fix the bug of the method from_data_stream in table_environement

2020-09-27 Thread GitBox


dianfu closed pull request #13491:
URL: https://github.com/apache/flink/pull/13491


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13491: [FLINK-19417][python] Fix the bug of the method from_data_stream in table_environement

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13491:
URL: https://github.com/apache/flink/pull/13491#issuecomment-699620559


   
   ## CI report:
   
   * 0131e7e07336c24798a7d3b6692807c93a96a42c UNKNOWN
   * f998eecfa23b9ed74ac2eb95a4b390d8efb6d849 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7012)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-19416) Support Python datetime object in from_collection of Python DataStream

2020-09-27 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202997#comment-17202997
 ] 

Dian Fu commented on FLINK-19416:
-

[~nicholasjiang] Have assigned it to you.

> Support Python datetime object in from_collection of Python DataStream
> --
>
> Key: FLINK-19416
> URL: https://issues.apache.org/jira/browse/FLINK-19416
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Huang Xingbo
>Assignee: Nicholas Jiang
>Priority: Major
> Fix For: 1.12.0
>
>
> Support Python datetime object in from_collection of Python DataStream
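
For illustration, a hedged sketch of what this improvement aims to enable (API names as in PyFlink 1.12; the values are made up):

```python
import datetime

from pyflink.common.typeinfo import Types
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()

# A Python datetime value passed directly to from_collection, mapped to a
# SQL_TIMESTAMP field of the declared row type.
ds = env.from_collection(
    [(1, datetime.datetime(2020, 9, 27, 12, 0, 0))],
    type_info=Types.ROW([Types.INT(), Types.SQL_TIMESTAMP()]))
ds.print()
env.execute("datetime-from-collection-sketch")
```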



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-19416) Support Python datetime object in from_collection of Python DataStream

2020-09-27 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-19416:
---

Assignee: Nicholas Jiang

> Support Python datetime object in from_collection of Python DataStream
> --
>
> Key: FLINK-19416
> URL: https://issues.apache.org/jira/browse/FLINK-19416
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Huang Xingbo
>Assignee: Nicholas Jiang
>Priority: Major
> Fix For: 1.12.0
>
>
> Support Python datetime object in from_collection of Python DataStream



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-web] tzulitai commented on pull request #379: [release] [statefun] Announcement post and downloads for Stateful Functions v2.2.0

2020-09-27 Thread GitBox


tzulitai commented on pull request #379:
URL: https://github.com/apache/flink-web/pull/379#issuecomment-699761575


   Thanks a lot for the review and corrections @morsapaes @alpinegizmo!
   I'm merging this now for the announcement along with your fixes.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13495: [Test]Fix case demo is more obvious to understand for ReinterpretAsKeyedStream

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13495:
URL: https://github.com/apache/flink/pull/13495#issuecomment-699753576


   
   ## CI report:
   
   * b64812a0cb08a25ee0853c57230ffe12ccc9ceb3 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7014)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #13495: [Test]Fix case demo is more obvious to understand for ReinterpretAsKeyedStream

2020-09-27 Thread GitBox


flinkbot commented on pull request #13495:
URL: https://github.com/apache/flink/pull/13495#issuecomment-699753576


   
   ## CI report:
   
   * b64812a0cb08a25ee0853c57230ffe12ccc9ceb3 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13494: [kafka connector]Fix cast question for properies() method in kafka ConnectorDescriptor

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13494:
URL: https://github.com/apache/flink/pull/13494#issuecomment-699748753


   
   ## CI report:
   
   * b139f0662c5c86ae97742c6dc6742aaef4f97a97 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7013)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #13495: [Test]Fix case demo is more obvious to understand for ReinterpretAsKeyedStream

2020-09-27 Thread GitBox


flinkbot commented on pull request #13495:
URL: https://github.com/apache/flink/pull/13495#issuecomment-699750515


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b64812a0cb08a25ee0853c57230ffe12ccc9ceb3 (Mon Sep 28 
03:33:41 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **Invalid pull request title: No valid Jira ID provided**
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] hehuiyuan opened a new pull request #13495: Fix case demo is more obvious to understand for ReinterpretAsKeyedStream

2020-09-27 Thread GitBox


hehuiyuan opened a new pull request #13495:
URL: https://github.com/apache/flink/pull/13495


   
   ## What is the purpose of the change
   
   To make the case demo easier to understand.
   Change
   
   
![image](https://user-images.githubusercontent.com/18002496/94387913-2567d180-017e-11eb-9adb-44e3167cca38.png)
   
   To
   
   
![image](https://user-images.githubusercontent.com/18002496/94387868-00735e80-017e-11eb-8529-f6af47bf2c79.png)
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-19432) Whether to capture the updates which don't change any monitored columns

2020-09-27 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202984#comment-17202984
 ] 

Benchao Li commented on FLINK-19432:


[~tinny] IMHO, this may introduce some overhead in the format. If it does not 
have a critical impact, I prefer to keep it as it is.
CC [~jark], WDYT?

> Whether to capture the updates which don't change any monitored columns
> ---
>
> Key: FLINK-19432
> URL: https://issues.apache.org/jira/browse/FLINK-19432
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.11.1
>Reporter: Zhengchao Shi
>Priority: Major
> Fix For: 1.12.0
>
>
> With `debezium-json` and `canal-json`: 
> Whether to capture updates which don't change any monitored columns. This 
> may happen if the monitored columns (the columns defined in the Flink SQL DDL) are a 
> subset of the columns in the database table. We can provide an optional option, 
> defaulting to 'true', which means all updates will be captured. You can set it to 
> 'false' to capture only updates that change the monitored columns.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #13494: [kafka connector]Fix cast question for properies() method in kafka ConnectorDescriptor

2020-09-27 Thread GitBox


flinkbot commented on pull request #13494:
URL: https://github.com/apache/flink/pull/13494#issuecomment-699748753


   
   ## CI report:
   
   * b139f0662c5c86ae97742c6dc6742aaef4f97a97 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18112) Approximate Task-Local Recovery -- Milestone One

2020-09-27 Thread Yuan Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Mei updated FLINK-18112:
-
Description: 
This is the Jira ticket for Milestone One of [FLIP-135 Approximate Task-Local 
Recovery|https://cwiki.apache.org/confluence/display/FLINK/FLIP-135+Approximate+Task-Local+Recovery]

In short, in Approximate Task-Local Recovery, if a task fails, only the failed 
task restarts without affecting the rest of the job. To ease discussion, we 
divide the problem of approximate task-local recovery into three parts with 
each part only focusing on addressing a set of problems. This Jira ticket 
focuses on addressing the first milestone.

Milestone One: sink recovery. Here, a sink task is a task with no consumers reading 
data from it. In this scenario, if a sink vertex fails, the sink is restarted 
from the last successfully completed checkpoint and data loss is expected. If a 
non-sink vertex fails, a regional failover strategy takes place. In milestone 
one, we focus on issues related to task failure handling and upstream 
reconnection.

 

Milestone one includes two parts of change:

*Part 1*: Network part: how the failed task is able to reconnect to the upstream 
Result(Sub)Partitions and continue processing data

*Part 2*: Scheduling part, a new failover strategy to restart the sink only 
when the sink fails.

 

  was:
This is the Jira ticket for Milestone One of [FLIP-135 Approximate Task-Local 
Recovery|[https://cwiki.apache.org/confluence/display/FLINK/FLIP-135+Approximate+Task-Local+Recovery].]

In short, in Approximate Task-Local Recovery, if a task fails, only the failed 
task restarts without affecting the rest of the job. To ease discussion, we 
divide the problem of approximate task-local recovery into three parts with 
each part only focusing on addressing a set of problems. This Jira ticket 
focuses on address the first milestone.

Milestone One: sink recovery. Here a sink task stands for no consumers reading 
data from it. In this scenario, if a sink vertex fails, the sink is restarted 
from the last successfully completed checkpoint and data loss is expected. If a 
non-sink vertex fails, a regional failover strategy takes place. In milestone 
one, we focus on issues related to task failure handling and upstream 
reconnection.

 

Milestone one includes two parts of change:

*Part 1*: Network Part: how the failed task able to link to the upstream 
Result(Sub)Partitions, and continue processing data

*Part 2*: Scheduling part, a new failover strategy to restart the sink only 
when the sink fails.

 


> Approximate Task-Local Recovery -- Milestone One
> 
>
> Key: FLINK-18112
> URL: https://issues.apache.org/jira/browse/FLINK-18112
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Checkpointing, Runtime / Coordination, Runtime 
> / Network
>Affects Versions: 1.12.0
>Reporter: Yuan Mei
>Assignee: Yuan Mei
>Priority: Major
> Fix For: 1.12.0
>
>
> This is the Jira ticket for Milestone One of [FLIP-135 Approximate Task-Local 
> Recovery|https://cwiki.apache.org/confluence/display/FLINK/FLIP-135+Approximate+Task-Local+Recovery]
> In short, in Approximate Task-Local Recovery, if a task fails, only the 
> failed task restarts without affecting the rest of the job. To ease 
> discussion, we divide the problem of approximate task-local recovery into 
> three parts with each part only focusing on addressing a set of problems. 
> This Jira ticket focuses on addressing the first milestone.
> Milestone One: sink recovery. Here, a sink task is a task with no consumers 
> reading data from it. In this scenario, if a sink vertex fails, the sink is 
> restarted from the last successfully completed checkpoint and data loss is 
> expected. If a non-sink vertex fails, a regional failover strategy takes 
> place. In milestone one, we focus on issues related to task failure handling 
> and upstream reconnection.
>  
> Milestone one includes two parts of change:
> *Part 1*: Network part: how the failed task is able to reconnect to the upstream 
> Result(Sub)Partitions and continue processing data
> *Part 2*: Scheduling part, a new failover strategy to restart the sink only 
> when the sink fails.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18112) Approximate Task-Local Recovery -- Milestone One

2020-09-27 Thread Yuan Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Mei updated FLINK-18112:
-
Description: 
This is the Jira ticket for Milestone One of [FLIP-135 Approximate Task-Local 
Recovery|[https://cwiki.apache.org/confluence/display/FLINK/FLIP-135+Approximate+Task-Local+Recovery].]

In short, in Approximate Task-Local Recovery, if a task fails, only the failed 
task restarts without affecting the rest of the job. To ease discussion, we 
divide the problem of approximate task-local recovery into three parts with 
each part only focusing on addressing a set of problems. This Jira ticket 
focuses on addressing the first milestone.

Milestone One: sink recovery. Here, a sink task is a task with no consumers reading 
data from it. In this scenario, if a sink vertex fails, the sink is restarted 
from the last successfully completed checkpoint and data loss is expected. If a 
non-sink vertex fails, a regional failover strategy takes place. In milestone 
one, we focus on issues related to task failure handling and upstream 
reconnection.

 

Milestone one includes two parts of change:

*Part 1*: Network part: how the failed task is able to reconnect to the upstream 
Result(Sub)Partitions and continue processing data

*Part 2*: Scheduling part, a new failover strategy to restart the sink only 
when the sink fails.

 

  was:
Build a prototype of single task failure recovery to address and answer the 
following questions:

*Step 1*: Scheduling part, restart a single node without restarting the 
upstream or downstream nodes.

*Step 2*: Checkpointing part, as my understanding of how regional failover 
works, this part might not need modification.

*Step 3*: Network part

  - how the recovered node able to link to the upstream ResultPartitions, and 
continue getting data

  - how the downstream node able to link to the recovered node, and continue 
getting node

  - how different netty transit mode affects the results

  - what if the failed node buffered data pool is full

*Step 4*: Failover process verification


> Approximate Task-Local Recovery -- Milestone One
> 
>
> Key: FLINK-18112
> URL: https://issues.apache.org/jira/browse/FLINK-18112
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Checkpointing, Runtime / Coordination, Runtime 
> / Network
>Affects Versions: 1.12.0
>Reporter: Yuan Mei
>Assignee: Yuan Mei
>Priority: Major
> Fix For: 1.12.0
>
>
> This is the Jira ticket for Milestone One of [FLIP-135 Approximate Task-Local 
> Recovery|[https://cwiki.apache.org/confluence/display/FLINK/FLIP-135+Approximate+Task-Local+Recovery].]
> In short, in Approximate Task-Local Recovery, if a task fails, only the 
> failed task restarts without affecting the rest of the job. To ease 
> discussion, we divide the problem of approximate task-local recovery into 
> three parts with each part only focusing on addressing a set of problems. 
> This Jira ticket focuses on addressing the first milestone.
> Milestone One: sink recovery. Here, a sink task is a task with no consumers 
> reading data from it. In this scenario, if a sink vertex fails, the sink is 
> restarted from the last successfully completed checkpoint and data loss is 
> expected. If a non-sink vertex fails, a regional failover strategy takes 
> place. In milestone one, we focus on issues related to task failure handling 
> and upstream reconnection.
>  
> Milestone one includes two parts of change:
> *Part 1*: Network part: how the failed task is able to reconnect to the upstream 
> Result(Sub)Partitions and continue processing data
> *Part 2*: Scheduling part, a new failover strategy to restart the sink only 
> when the sink fails.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] HuangXingBo commented on pull request #13492: [FLINK-19181][python] Make python processes respect the calculated managed memory fraction

2020-09-27 Thread GitBox


HuangXingBo commented on pull request #13492:
URL: https://github.com/apache/flink/pull/13492#issuecomment-699746710


   @dianfu The memory configuration shown in the python udf related documents 
is as follows 
   
`table_env.get_config().get_configuration().set_string("taskmanager.memory.task.off-heap.size",
 '80m')`
   which I think can be removed as well.
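
For reference, a hedged sketch of the two configuration styles being discussed; the environment setup is illustrative and the values are placeholders.

```python
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import StreamTableEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
table_env = StreamTableEnvironment.create(env)

# With 'python.fn-execution.memory.managed' defaulting to 'true' (the change in
# this PR), Python workers use Flink managed memory, so setting it explicitly
# becomes a no-op:
table_env.get_config().get_configuration().set_string(
    "python.fn-execution.memory.managed", "true")

# Previously documented alternative (what the comment above suggests removing
# from the docs):
# table_env.get_config().get_configuration().set_string(
#     "taskmanager.memory.task.off-heap.size", '80m')
```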



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] HuangXingBo commented on a change in pull request #13492: [FLINK-19181][python] Make python processes respect the calculated managed memory fraction

2020-09-27 Thread GitBox


HuangXingBo commented on a change in pull request #13492:
URL: https://github.com/apache/flink/pull/13492#discussion_r495662360



##
File path: flink-python/src/main/java/org/apache/flink/python/PythonOptions.java
##
@@ -156,7 +156,7 @@
 */
public static final ConfigOption USE_MANAGED_MEMORY = 
ConfigOptions
.key("python.fn-execution.memory.managed")
-   .defaultValue(false)
+   .defaultValue(true)

Review comment:
   We need to also change the default value of 
`python.fn-execution.memory.managed` in `python_configuration.html`
   

##
File path: flink-python/pyflink/fn_execution/beam/beam_sdk_worker_main.py
##
@@ -23,10 +25,34 @@
 except ImportError:
 import pyflink.fn_execution.beam.beam_operations_slow
 
+# resource is only available in Unix
+try:
+import resource
+has_resource_module = True
+except ImportError:
+has_resource_module = False
+
 # force to register the coders to SDK Harness
 import pyflink.fn_execution.beam.beam_coders # noqa # pylint: 
disable=unused-import
 
 import apache_beam.runners.worker.sdk_worker_main
 
+
+def set_memory_limit():
+memory_limit = int(os.environ.get('_PYTHON_WORKER_MEMORY_LIMIT', "-1"))
+if memory_limit > 0 and has_resource_module:

Review comment:
   For the case where `has_resource_module` is false but `memory_limit > 0` 
is set, can we log a warning?
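
A minimal sketch of the suggested warning; the names follow the snippet quoted above, and the `resource`-based limit itself is elided.

```python
import logging
import os

try:
    import resource  # only available on Unix
    has_resource_module = True
except ImportError:
    has_resource_module = False


def set_memory_limit():
    memory_limit = int(os.environ.get('_PYTHON_WORKER_MEMORY_LIMIT', "-1"))
    if memory_limit > 0:
        if has_resource_module:
            pass  # apply the limit via the resource module, as in the PR
        else:
            logging.warning(
                "_PYTHON_WORKER_MEMORY_LIMIT is set to %d bytes, but the "
                "'resource' module is unavailable on this platform, so the "
                "limit will not be enforced.", memory_limit)
```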
   

##
File path: 
flink-python/src/main/java/org/apache/flink/streaming/api/runners/python/beam/BeamPythonFunctionRunner.java
##
@@ -235,6 +274,14 @@ public void close() throws Exception {
jobBundleFactory = null;
}
 
+   try {
+   if (sharedResources != null) {
+sharedResources.close();

Review comment:
   ```suggestion
sharedResources.close();
   ```

##
File path: 
flink-python/src/main/java/org/apache/flink/streaming/api/runners/python/beam/BeamPythonFunctionRunner.java
##
@@ -117,13 +120,19 @@
 
private transient boolean bundleStarted;
 
+   private static final String MANAGED_MEMORY_RESOURCE_ID = 
"python-process-managed-memory";

Review comment:
   Move the declaration of the variables `MANAGED_MEMORY_RESOURCE_ID` and 
`PYTHON_WORKER_MEMORY_LIMIT` to the front of `bundleStarted` ?
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #13494: Fix cast question for properies() method in kafka ConnectorDescriptor

2020-09-27 Thread GitBox


flinkbot commented on pull request #13494:
URL: https://github.com/apache/flink/pull/13494#issuecomment-699743921


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b139f0662c5c86ae97742c6dc6742aaef4f97a97 (Mon Sep 28 
03:05:52 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **Invalid pull request title: No valid Jira ID provided**
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13491: [FLINK-19417][python] Fix the bug of the method from_data_stream in table_environement

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13491:
URL: https://github.com/apache/flink/pull/13491#issuecomment-699620559


   
   ## CI report:
   
   * 0131e7e07336c24798a7d3b6692807c93a96a42c UNKNOWN
   * 20a9a58f2aa694681754572c716ee0b408484315 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7004)
 
   * f998eecfa23b9ed74ac2eb95a4b390d8efb6d849 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7012)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13482: test

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13482:
URL: https://github.com/apache/flink/pull/13482#issuecomment-698903502


   
   ## CI report:
   
   * 5fb5255b9edc3cd74b836a89489a0a81591a514f Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6999)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7010)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7002)
 
   * 81efe483e7dbcff7e365b535e2dcae6153f217a8 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7011)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18112) Approximate Task-Local Recovery -- Milestone One

2020-09-27 Thread Yuan Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Mei updated FLINK-18112:
-
Summary: Approximate Task-Local Recovery -- Milestone One  (was: 
Approximate Task-Local Recovery)

> Approximate Task-Local Recovery -- Milestone One
> 
>
> Key: FLINK-18112
> URL: https://issues.apache.org/jira/browse/FLINK-18112
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Checkpointing, Runtime / Coordination, Runtime 
> / Network
>Affects Versions: 1.12.0
>Reporter: Yuan Mei
>Assignee: Yuan Mei
>Priority: Major
> Fix For: 1.12.0
>
>
> Build a prototype of single task failure recovery to address and answer the 
> following questions:
> *Step 1*: Scheduling part, restart a single node without restarting the 
> upstream or downstream nodes.
> *Step 2*: Checkpointing part: per my understanding of how regional failover 
> works, this part might not need modification.
> *Step 3*: Network part
>   - how the recovered node is able to link to the upstream ResultPartitions and 
> continue getting data
>   - how the downstream node is able to link to the recovered node and continue 
> getting data
>   - how different Netty transit modes affect the results
>   - what happens if the failed node's buffered data pool is full
> *Step 4*: Failover process verification



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18112) Approximate Task-Local Recovery

2020-09-27 Thread Yuan Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Mei updated FLINK-18112:
-
Summary: Approximate Task-Local Recovery  (was: Single Task Approximate 
Failure Recovery)

> Approximate Task-Local Recovery
> ---
>
> Key: FLINK-18112
> URL: https://issues.apache.org/jira/browse/FLINK-18112
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Checkpointing, Runtime / Coordination, Runtime 
> / Network
>Affects Versions: 1.12.0
>Reporter: Yuan Mei
>Assignee: Yuan Mei
>Priority: Major
> Fix For: 1.12.0
>
>
> Build a prototype of single task failure recovery to address and answer the 
> following questions:
> *Step 1*: Scheduling part, restart a single node without restarting the 
> upstream or downstream nodes.
> *Step 2*: Checkpointing part: per my understanding of how regional failover 
> works, this part might not need modification.
> *Step 3*: Network part
>   - how the recovered node is able to link to the upstream ResultPartitions and 
> continue getting data
>   - how the downstream node is able to link to the recovered node and continue 
> getting data
>   - how different Netty transit modes affect the results
>   - what happens if the failed node's buffered data pool is full
> *Step 4*: Failover process verification



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] hehuiyuan opened a new pull request #13494: Fix cast question for properies() method in kafka ConnectorDescriptor

2020-09-27 Thread GitBox


hehuiyuan opened a new pull request #13494:
URL: https://github.com/apache/flink/pull/13494


   
   ## What is the purpose of the change
   
   This pull request fixes the Kafka connector descriptor. There is a cast problem when using 
the properties() method.
   
```
  Properties props = new Properties();
  props.put("enable.auto.commit", "false");
  props.put("fetch.max.wait.ms", "3000");
  props.put("flink.poll-timeout", 5000);
  props.put("flink.partition-discovery.interval-millis", false);

  kafka = new Kafka()
      .version("0.11")
      .topic(topic)
      .properties(props);
   ```
   
   ```
   Exception in thread "main" java.lang.ClassCastException: java.lang.Integer 
cannot be cast to java.lang.String
   
   Exception in thread "main" java.lang.ClassCastException: java.lang.Boolean 
cannot be cast to java.lang.String
   ```
   
   ## Brief change log
   
  - *change the `(String) v` cast to `String.valueOf(v)` in Kafka.java*
   
   
   
   
   ## Documentation
   
 - Does this pull request introduce a new feature? ( no)
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18112) Single Task Approximate Failure Recovery

2020-09-27 Thread Yuan Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Mei updated FLINK-18112:
-
Summary: Single Task Approximate Failure Recovery  (was: Single Task 
Failure Recovery Prototype)

> Single Task Approximate Failure Recovery
> 
>
> Key: FLINK-18112
> URL: https://issues.apache.org/jira/browse/FLINK-18112
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Checkpointing, Runtime / Coordination, Runtime 
> / Network
>Affects Versions: 1.12.0
>Reporter: Yuan Mei
>Assignee: Yuan Mei
>Priority: Major
> Fix For: 1.12.0
>
>
> Build a prototype of single task failure recovery to address and answer the 
> following questions:
> *Step 1*: Scheduling part, restart a single node without restarting the 
> upstream or downstream nodes.
> *Step 2*: Checkpointing part: per my understanding of how regional failover 
> works, this part might not need modification.
> *Step 3*: Network part
>   - how the recovered node is able to link to the upstream ResultPartitions and 
> continue getting data
>   - how the downstream node is able to link to the recovered node and continue 
> getting data
>   - how different Netty transit modes affect the results
>   - what happens if the failed node's buffered data pool is full
> *Step 4*: Failover process verification



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wangyang0918 commented on a change in pull request #13322: [FLINK-17480][kubernetes] Support running PyFlink on Kubernetes.

2020-09-27 Thread GitBox


wangyang0918 commented on a change in pull request #13322:
URL: https://github.com/apache/flink/pull/13322#discussion_r495664630



##
File path: 
flink-end-to-end-tests/test-scripts/test_kubernetes_pyflink_application.sh
##
@@ -0,0 +1,83 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+source "$(dirname "$0")"/common_kubernetes.sh
+
+CURRENT_DIR=`cd "$(dirname "$0")" && pwd -P`
+CLUSTER_ROLE_BINDING="flink-role-binding-default"
+CLUSTER_ID="flink-native-k8s-pyflink-application-1"
+PURE_FLINK_IMAGE_NAME="test_kubernetes_application"
+PYFLINK_IMAGE_NAME="test_kubernetes_pyflink_application"
+LOCAL_LOGS_PATH="${TEST_DATA_DIR}/log"
+
+function internal_cleanup {
+kubectl delete deployment ${CLUSTER_ID}
+kubectl delete clusterrolebinding ${CLUSTER_ROLE_BINDING}
+}
+
+start_kubernetes
+
+build_image ${PURE_FLINK_IMAGE_NAME}
+
+# Build PyFlink wheel package

Review comment:
   We have to make sure that this series of commands to build an image for 
Python is stable. Otherwise, the devs will suffer a lot when it fails.
   
   Do you know how long it will take to run this E2E test?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13482: test

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13482:
URL: https://github.com/apache/flink/pull/13482#issuecomment-698903502


   
   ## CI report:
   
   * 5fb5255b9edc3cd74b836a89489a0a81591a514f Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6999)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7010)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7002)
 
   * 81efe483e7dbcff7e365b535e2dcae6153f217a8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-15578) Implement exactly-once JDBC sink

2020-09-27 Thread Kenzyme Le (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202969#comment-17202969
 ] 

Kenzyme Le commented on FLINK-15578:


Great! Thanks for the update.

> Implement exactly-once JDBC sink
> 
>
> Key: FLINK-15578
> URL: https://issues.apache.org/jira/browse/FLINK-15578
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Reporter: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As per discussion in the dev mailing list, there are two options:
>  # Write-ahead log
>  # Two-phase commit (XA)
> the latter being preferable.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19432) Whether to capture the updates which don't change any monitored columns

2020-09-27 Thread Zhengchao Shi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202968#comment-17202968
 ] 

Zhengchao Shi commented on FLINK-19432:
---

[~libenchao] What do you think about this improvement in "debezium-json"?

> Whether to capture the updates which don't change any monitored columns
> ---
>
> Key: FLINK-19432
> URL: https://issues.apache.org/jira/browse/FLINK-19432
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.11.1
>Reporter: Zhengchao Shi
>Priority: Major
> Fix For: 1.12.0
>
>
> With `debezium-json` and `canal-json`: 
> Whether to capture updates which don't change any monitored columns. This 
> may happen if the monitored columns (the columns defined in the Flink SQL DDL) are a 
> subset of the columns in the database table. We can provide an optional option, 
> defaulting to 'true', which means all updates will be captured. You can set it to 
> 'false' to capture only updates that change the monitored columns.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] HuangXingBo commented on pull request #13491: [FLINK-19417][python] Fix the bug of the method from_data_stream in table_environement

2020-09-27 Thread GitBox


HuangXingBo commented on pull request #13491:
URL: https://github.com/apache/flink/pull/13491#issuecomment-699735286


   @SteNicholas Thanks a lot for the update. LGTM.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13482: test

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13482:
URL: https://github.com/apache/flink/pull/13482#issuecomment-698903502


   
   ## CI report:
   
   * 5fb5255b9edc3cd74b836a89489a0a81591a514f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7002)
 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6999)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7010)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-19438) Queryable State needs to support both read-uncommitted and read-committed

2020-09-27 Thread sheep (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sheep updated FLINK-19438:
--
Summary: Queryable State needs to support both read-uncommitted and 
read-committed   (was: Queryable State need implement both read-uncommitted and 
read-committed )

> Queryable State needs to support both read-uncommitted and read-committed 
> --
>
> Key: FLINK-19438
> URL: https://issues.apache.org/jira/browse/FLINK-19438
> Project: Flink
>  Issue Type: Wish
>  Components: Runtime / Queryable State
>Reporter: sheep
>Priority: Major
>
> Flink exposes its managed keyed (partitioned) state to the outside world and 
> allows the user to query a job’s state from outside Flink. From a traditional 
> database isolation-level viewpoint, the queries access uncommitted state, 
> thus following the read-uncommitted isolation level.
> I fully understand Flink provides read-uncommitted state query in order to 
> query real-time state. But the read-committed state is also important (I 
> cannot fully explain). Since Flink 1.9, querying and even modifying the state in 
> a checkpoint has been possible. The state in a checkpoint is equivalent to 
> read-committed state. So, users can query read-committed state via the State 
> Processor API.
> *Flink should let users configure the isolation level of Queryable State by 
> integrating the two levels of state query.*



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19438) Queryable State need implement both read-uncommitted and read-committed

2020-09-27 Thread sheep (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sheep updated FLINK-19438:
--
Summary: Queryable State need implement both read-uncommitted and 
read-committed   (was: Queryable State need implement both read-uncommitted and 
read-committed)

> Queryable State need implement both read-uncommitted and read-committed 
> 
>
> Key: FLINK-19438
> URL: https://issues.apache.org/jira/browse/FLINK-19438
> Project: Flink
>  Issue Type: Wish
>  Components: Runtime / Queryable State
>Reporter: sheep
>Priority: Major
>
> Flink exposes its managed keyed (partitioned) state to the outside world and 
> allows the user to query a job’s state from outside Flink. From a traditional 
> database isolation-level viewpoint, the queries access uncommitted state, 
> thus following the read-uncommitted isolation level.
> I fully understand Flink provides read-uncommitted state query in order to 
> query real-time state. But the read-committed state is also important (I 
> cannot fully explain). Since Flink 1.9, querying and even modifying the state in 
> a checkpoint has been possible. The state in a checkpoint is equivalent to 
> read-committed state. So, users can query read-committed state via the State 
> Processor API.
> *Flink should let users configure the isolation level of Queryable State by 
> integrating the two levels of state query.*



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-19438) Queryable State need implement both read-uncommitted and read-committed

2020-09-27 Thread sheep (Jira)
sheep created FLINK-19438:
-

 Summary: Queryable State need implement both read-uncommitted and 
read-committed
 Key: FLINK-19438
 URL: https://issues.apache.org/jira/browse/FLINK-19438
 Project: Flink
  Issue Type: Wish
  Components: Runtime / Queryable State
Reporter: sheep


Flink exposes its managed keyed (partitioned) state to the outside world and 
allows the user to query a job’s state from outside Flink. From a traditional 
database isolation-level viewpoint, the queries access uncommitted state, thus 
following the read-uncommitted isolation level.

I fully understand Flink provides read-uncommitted state query in order to 
query real-time state. But the read-committed state is also important (I cannot 
fully explain). Since Flink 1.9, querying and even modifying the state in a checkpoint 
has been possible. The state in a checkpoint is equivalent to read-committed 
state. So, users can query read-committed state via the State Processor API.

*Flink should let users configure the isolation level of Queryable State by 
integrating the two levels of state query.*



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW commented on pull request #13482: test

2020-09-27 Thread GitBox


zhijiangW commented on pull request #13482:
URL: https://github.com/apache/flink/pull/13482#issuecomment-699733170


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13491: [FLINK-19417][python] Fix the bug of the method from_data_stream in table_environement

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13491:
URL: https://github.com/apache/flink/pull/13491#issuecomment-699620559


   
   ## CI report:
   
   * 0131e7e07336c24798a7d3b6692807c93a96a42c UNKNOWN
   * 20a9a58f2aa694681754572c716ee0b408484315 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7004)
 
   * f998eecfa23b9ed74ac2eb95a4b390d8efb6d849 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-19416) Support Python datetime object in from_collection of Python DataStream

2020-09-27 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202951#comment-17202951
 ] 

Huang Xingbo commented on FLINK-19416:
--

[~nicholasjiang] Thanks a lot. [~dianfu] Could you help assign this JIRA to 
[~nicholasjiang]?

> Support Python datetime object in from_collection of Python DataStream
> --
>
> Key: FLINK-19416
> URL: https://issues.apache.org/jira/browse/FLINK-19416
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Huang Xingbo
>Priority: Major
> Fix For: 1.12.0
>
>
> Support Python datetime object in from_collection of Python DataStream



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dianfu commented on pull request #13491: [FLINK-19417][python] Fix the bug of the method from_data_stream in table_environement

2020-09-27 Thread GitBox


dianfu commented on pull request #13491:
URL: https://github.com/apache/flink/pull/13491#issuecomment-699719124


   @SteNicholas Thanks for the update. LGTM. There are checkstyle issues. 
Could you take a look?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-19437) FileSourceTextLinesITCase.testContinuousTextFileSource failed with "SimpleStreamFormat is not splittable, but found split end (0) different from file length (198)"

2020-09-27 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19437:

Labels: test-stability  (was: )

> FileSourceTextLinesITCase.testContinuousTextFileSource failed with 
> "SimpleStreamFormat is not splittable, but found split end (0) different from 
> file length (198)"
> ---
>
> Key: FLINK-19437
> URL: https://issues.apache.org/jira/browse/FLINK-19437
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7008=logs=fc5181b0-e452-5c8f-68de-1097947f6483=62110053-334f-5295-a0ab-80dd7e2babbf
> {code}
> 2020-09-27T21:58:38.9199090Z [ERROR] 
> testContinuousTextFileSource(org.apache.flink.connector.file.src.FileSourceTextLinesITCase)
>   Time elapsed: 0.517 s  <<< ERROR!
> 2020-09-27T21:58:38.9199619Z java.lang.RuntimeException: Failed to fetch next 
> result
> 2020-09-27T21:58:38.9200118Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> 2020-09-27T21:58:38.9200722Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:77)
> 2020-09-27T21:58:38.9201290Z  at 
> org.apache.flink.streaming.api.datastream.DataStreamUtils.collectRecordsFromUnboundedStream(DataStreamUtils.java:150)
> 2020-09-27T21:58:38.9201920Z  at 
> org.apache.flink.connector.file.src.FileSourceTextLinesITCase.testContinuousTextFileSource(FileSourceTextLinesITCase.java:136)
> 2020-09-27T21:58:38.9202570Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-09-27T21:58:38.9203054Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-09-27T21:58:38.9203539Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-09-27T21:58:38.9203968Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-09-27T21:58:38.9204369Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-09-27T21:58:38.9204844Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-09-27T21:58:38.9205359Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-09-27T21:58:38.9205814Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-09-27T21:58:38.9206240Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-09-27T21:58:38.9206611Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-09-27T21:58:38.9206971Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-09-27T21:58:38.9207404Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-09-27T21:58:38.9207971Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-09-27T21:58:38.9208404Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-09-27T21:58:38.9208877Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-09-27T21:58:38.9209279Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-09-27T21:58:38.9209680Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-09-27T21:58:38.9210064Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-09-27T21:58:38.9210476Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-09-27T21:58:38.9210881Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-09-27T21:58:38.9211272Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-09-27T21:58:38.9211638Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-09-27T21:58:38.9212305Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2020-09-27T21:58:38.9213157Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 2020-09-27T21:58:38.9213663Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-09-27T21:58:38.9214123Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 2020-09-27T21:58:38.9214620Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 2020-09-27T21:58:38.9215148Z  at 
> 

[jira] [Commented] (FLINK-18818) HadoopRenameCommitterHDFSTest.testCommitOneFile[Override: false] failed with "java.io.IOException: The stream is closed"

2020-09-27 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202949#comment-17202949
 ] 

Dian Fu commented on FLINK-18818:
-

HadoopRenameCommitterHDFSTest.testCommitMultipleFilesMixed has also failed with 
the same error: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7008=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=420bd9ec-164e-562e-8947-0dacde3cec91

> HadoopRenameCommitterHDFSTest.testCommitOneFile[Override: false] failed with 
> "java.io.IOException: The stream is closed"
> 
>
> Key: FLINK-18818
> URL: https://issues.apache.org/jira/browse/FLINK-18818
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=5177=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=420bd9ec-164e-562e-8947-0dacde3cec91
> {code}
> 2020-08-04T20:56:51.1835382Z [ERROR] testCommitOneFile[Override: 
> false](org.apache.flink.formats.hadoop.bulk.committer.HadoopRenameCommitterHDFSTest)
>   Time elapsed: 0.046 s  <<< ERROR!
> 2020-08-04T20:56:51.1835950Z java.io.IOException: The stream is closed
> 2020-08-04T20:56:51.1836413Z  at 
> org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
> 2020-08-04T20:56:51.1836867Z  at 
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
> 2020-08-04T20:56:51.1837313Z  at 
> java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
> 2020-08-04T20:56:51.1837712Z  at 
> java.io.DataOutputStream.flush(DataOutputStream.java:123)
> 2020-08-04T20:56:51.1838116Z  at 
> java.io.FilterOutputStream.close(FilterOutputStream.java:158)
> 2020-08-04T20:56:51.1838527Z  at 
> org.apache.hadoop.hdfs.DataStreamer.closeStream(DataStreamer.java:987)
> 2020-08-04T20:56:51.1838974Z  at 
> org.apache.hadoop.hdfs.DataStreamer.closeInternal(DataStreamer.java:839)
> 2020-08-04T20:56:51.1839404Z  at 
> org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:834)
> 2020-08-04T20:56:51.1839775Z  Suppressed: java.io.IOException: The stream is 
> closed
> 2020-08-04T20:56:51.1840184Z  at 
> org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
> 2020-08-04T20:56:51.1840641Z  at 
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
> 2020-08-04T20:56:51.1841087Z  at 
> java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
> 2020-08-04T20:56:51.1841509Z  at 
> java.io.FilterOutputStream.close(FilterOutputStream.java:158)
> 2020-08-04T20:56:51.1841910Z  at 
> java.io.FilterOutputStream.close(FilterOutputStream.java:159)
> 2020-08-04T20:56:51.1842207Z  ... 3 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-19437) FileSourceTextLinesITCase.testContinuousTextFileSource failed with "SimpleStreamFormat is not splittable, but found split end (0) different from file length (198)"

2020-09-27 Thread Dian Fu (Jira)
Dian Fu created FLINK-19437:
---

 Summary: FileSourceTextLinesITCase.testContinuousTextFileSource 
failed with "SimpleStreamFormat is not splittable, but found split end (0) 
different from file length (198)"
 Key: FLINK-19437
 URL: https://issues.apache.org/jira/browse/FLINK-19437
 Project: Flink
  Issue Type: Bug
  Components: Connectors / FileSystem
Affects Versions: 1.12.0
Reporter: Dian Fu


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7008=logs=fc5181b0-e452-5c8f-68de-1097947f6483=62110053-334f-5295-a0ab-80dd7e2babbf

{code}
2020-09-27T21:58:38.9199090Z [ERROR] 
testContinuousTextFileSource(org.apache.flink.connector.file.src.FileSourceTextLinesITCase)
  Time elapsed: 0.517 s  <<< ERROR!
2020-09-27T21:58:38.9199619Z java.lang.RuntimeException: Failed to fetch next 
result
2020-09-27T21:58:38.9200118Zat 
org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
2020-09-27T21:58:38.9200722Zat 
org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:77)
2020-09-27T21:58:38.9201290Zat 
org.apache.flink.streaming.api.datastream.DataStreamUtils.collectRecordsFromUnboundedStream(DataStreamUtils.java:150)
2020-09-27T21:58:38.9201920Zat 
org.apache.flink.connector.file.src.FileSourceTextLinesITCase.testContinuousTextFileSource(FileSourceTextLinesITCase.java:136)
2020-09-27T21:58:38.9202570Zat 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-09-27T21:58:38.9203054Zat 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-09-27T21:58:38.9203539Zat 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-09-27T21:58:38.9203968Zat 
java.lang.reflect.Method.invoke(Method.java:498)
2020-09-27T21:58:38.9204369Zat 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2020-09-27T21:58:38.9204844Zat 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2020-09-27T21:58:38.9205359Zat 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2020-09-27T21:58:38.9205814Zat 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2020-09-27T21:58:38.9206240Zat 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
2020-09-27T21:58:38.9206611Zat 
org.junit.rules.RunRules.evaluate(RunRules.java:20)
2020-09-27T21:58:38.9206971Zat 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
2020-09-27T21:58:38.9207404Zat 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
2020-09-27T21:58:38.9207971Zat 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
2020-09-27T21:58:38.9208404Zat 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
2020-09-27T21:58:38.9208877Zat 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
2020-09-27T21:58:38.9209279Zat 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
2020-09-27T21:58:38.9209680Zat 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
2020-09-27T21:58:38.9210064Zat 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
2020-09-27T21:58:38.9210476Zat 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
2020-09-27T21:58:38.9210881Zat 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
2020-09-27T21:58:38.9211272Zat 
org.junit.rules.RunRules.evaluate(RunRules.java:20)
2020-09-27T21:58:38.9211638Zat 
org.junit.runners.ParentRunner.run(ParentRunner.java:363)
2020-09-27T21:58:38.9212305Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
2020-09-27T21:58:38.9213157Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
2020-09-27T21:58:38.9213663Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
2020-09-27T21:58:38.9214123Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
2020-09-27T21:58:38.9214620Zat 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
2020-09-27T21:58:38.9215148Zat 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
2020-09-27T21:58:38.9215650Zat 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
2020-09-27T21:58:38.9216095Zat 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
2020-09-27T21:58:38.9216516Z Caused by: java.io.IOException: Failed to fetch 
job execution result
2020-09-27T21:58:38.9217004Zat 

[jira] [Updated] (FLINK-19437) FileSourceTextLinesITCase.testContinuousTextFileSource failed with "SimpleStreamFormat is not splittable, but found split end (0) different from file length (198)"

2020-09-27 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19437:

Component/s: Tests

> FileSourceTextLinesITCase.testContinuousTextFileSource failed with 
> "SimpleStreamFormat is not splittable, but found split end (0) different from 
> file length (198)"
> ---
>
> Key: FLINK-19437
> URL: https://issues.apache.org/jira/browse/FLINK-19437
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7008=logs=fc5181b0-e452-5c8f-68de-1097947f6483=62110053-334f-5295-a0ab-80dd7e2babbf
> {code}
> 2020-09-27T21:58:38.9199090Z [ERROR] 
> testContinuousTextFileSource(org.apache.flink.connector.file.src.FileSourceTextLinesITCase)
>   Time elapsed: 0.517 s  <<< ERROR!
> 2020-09-27T21:58:38.9199619Z java.lang.RuntimeException: Failed to fetch next 
> result
> 2020-09-27T21:58:38.9200118Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> 2020-09-27T21:58:38.9200722Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:77)
> 2020-09-27T21:58:38.9201290Z  at 
> org.apache.flink.streaming.api.datastream.DataStreamUtils.collectRecordsFromUnboundedStream(DataStreamUtils.java:150)
> 2020-09-27T21:58:38.9201920Z  at 
> org.apache.flink.connector.file.src.FileSourceTextLinesITCase.testContinuousTextFileSource(FileSourceTextLinesITCase.java:136)
> 2020-09-27T21:58:38.9202570Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-09-27T21:58:38.9203054Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-09-27T21:58:38.9203539Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-09-27T21:58:38.9203968Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-09-27T21:58:38.9204369Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-09-27T21:58:38.9204844Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-09-27T21:58:38.9205359Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-09-27T21:58:38.9205814Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-09-27T21:58:38.9206240Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-09-27T21:58:38.9206611Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-09-27T21:58:38.9206971Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-09-27T21:58:38.9207404Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-09-27T21:58:38.9207971Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-09-27T21:58:38.9208404Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-09-27T21:58:38.9208877Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-09-27T21:58:38.9209279Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-09-27T21:58:38.9209680Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-09-27T21:58:38.9210064Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-09-27T21:58:38.9210476Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-09-27T21:58:38.9210881Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-09-27T21:58:38.9211272Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-09-27T21:58:38.9211638Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-09-27T21:58:38.9212305Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2020-09-27T21:58:38.9213157Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 2020-09-27T21:58:38.9213663Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-09-27T21:58:38.9214123Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 2020-09-27T21:58:38.9214620Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 2020-09-27T21:58:38.9215148Z  at 
> 

[jira] [Commented] (FLINK-19437) FileSourceTextLinesITCase.testContinuousTextFileSource failed with "SimpleStreamFormat is not splittable, but found split end (0) different from file length (198)"

2020-09-27 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202948#comment-17202948
 ] 

Dian Fu commented on FLINK-19437:
-

cc [~sewen]

> FileSourceTextLinesITCase.testContinuousTextFileSource failed with 
> "SimpleStreamFormat is not splittable, but found split end (0) different from 
> file length (198)"
> ---
>
> Key: FLINK-19437
> URL: https://issues.apache.org/jira/browse/FLINK-19437
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7008=logs=fc5181b0-e452-5c8f-68de-1097947f6483=62110053-334f-5295-a0ab-80dd7e2babbf
> {code}
> 2020-09-27T21:58:38.9199090Z [ERROR] 
> testContinuousTextFileSource(org.apache.flink.connector.file.src.FileSourceTextLinesITCase)
>   Time elapsed: 0.517 s  <<< ERROR!
> 2020-09-27T21:58:38.9199619Z java.lang.RuntimeException: Failed to fetch next 
> result
> 2020-09-27T21:58:38.9200118Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> 2020-09-27T21:58:38.9200722Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:77)
> 2020-09-27T21:58:38.9201290Z  at 
> org.apache.flink.streaming.api.datastream.DataStreamUtils.collectRecordsFromUnboundedStream(DataStreamUtils.java:150)
> 2020-09-27T21:58:38.9201920Z  at 
> org.apache.flink.connector.file.src.FileSourceTextLinesITCase.testContinuousTextFileSource(FileSourceTextLinesITCase.java:136)
> 2020-09-27T21:58:38.9202570Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-09-27T21:58:38.9203054Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-09-27T21:58:38.9203539Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-09-27T21:58:38.9203968Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-09-27T21:58:38.9204369Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-09-27T21:58:38.9204844Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-09-27T21:58:38.9205359Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-09-27T21:58:38.9205814Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-09-27T21:58:38.9206240Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-09-27T21:58:38.9206611Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-09-27T21:58:38.9206971Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-09-27T21:58:38.9207404Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-09-27T21:58:38.9207971Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-09-27T21:58:38.9208404Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-09-27T21:58:38.9208877Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-09-27T21:58:38.9209279Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-09-27T21:58:38.9209680Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-09-27T21:58:38.9210064Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-09-27T21:58:38.9210476Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-09-27T21:58:38.9210881Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-09-27T21:58:38.9211272Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-09-27T21:58:38.9211638Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-09-27T21:58:38.9212305Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2020-09-27T21:58:38.9213157Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 2020-09-27T21:58:38.9213663Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-09-27T21:58:38.9214123Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 2020-09-27T21:58:38.9214620Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 

[jira] [Updated] (FLINK-19436) TPC-DS end-to-end test (Blink planner) failed during shutdown

2020-09-27 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19436:

Labels: test-stability  (was: )

> TPC-DS end-to-end test (Blink planner) failed during shutdown
> -
>
> Key: FLINK-19436
> URL: https://issues.apache.org/jira/browse/FLINK-19436
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7009=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=2b7514ee-e706-5046-657b-3430666e7bd9
> {code}
> 2020-09-27T22:37:53.2236467Z Stopping taskexecutor daemon (pid: 2992) on host 
> fv-az655.
> 2020-09-27T22:37:53.4450715Z Stopping standalonesession daemon (pid: 2699) on 
> host fv-az655.
> 2020-09-27T22:37:53.8014537Z Skipping taskexecutor daemon (pid: 11173), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8019740Z Skipping taskexecutor daemon (pid: 11561), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8022857Z Skipping taskexecutor daemon (pid: 11849), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8023616Z Skipping taskexecutor daemon (pid: 12180), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8024327Z Skipping taskexecutor daemon (pid: 12950), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8025027Z Skipping taskexecutor daemon (pid: 13472), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8025727Z Skipping taskexecutor daemon (pid: 16577), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8026417Z Skipping taskexecutor daemon (pid: 16959), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8027086Z Skipping taskexecutor daemon (pid: 17250), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8027770Z Skipping taskexecutor daemon (pid: 17601), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8028400Z Stopping taskexecutor daemon (pid: 18438) on 
> host fv-az655.
> 2020-09-27T22:37:53.8029314Z 
> /home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT/bin/taskmanager.sh:
>  line 99: 18438 Terminated  "${FLINK_BIN_DIR}"/flink-daemon.sh 
> $STARTSTOP $ENTRYPOINT "${ARGS[@]}"
> 2020-09-27T22:37:53.8029895Z [FAIL] Test script contains errors.
> 2020-09-27T22:37:53.8032092Z Checking for errors...
> 2020-09-27T22:37:55.3713368Z No errors in log files.
> 2020-09-27T22:37:55.3713935Z Checking for exceptions...
> 2020-09-27T22:37:56.9046391Z No exceptions in log files.
> 2020-09-27T22:37:56.9047333Z Checking for non-empty .out files...
> 2020-09-27T22:37:56.9064402Z No non-empty .out files.
> 2020-09-27T22:37:56.9064859Z 
> 2020-09-27T22:37:56.9065588Z [FAIL] 'TPC-DS end-to-end test (Blink planner)' 
> failed after 16 minutes and 54 seconds! Test exited with exit code 1
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-19436) TPC-DS end-to-end test (Blink planner) failed during shutdown

2020-09-27 Thread Dian Fu (Jira)
Dian Fu created FLINK-19436:
---

 Summary: TPC-DS end-to-end test (Blink planner) failed during 
shutdown
 Key: FLINK-19436
 URL: https://issues.apache.org/jira/browse/FLINK-19436
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.11.0
Reporter: Dian Fu


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7009=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=2b7514ee-e706-5046-657b-3430666e7bd9

{code}
2020-09-27T22:37:53.2236467Z Stopping taskexecutor daemon (pid: 2992) on host 
fv-az655.
2020-09-27T22:37:53.4450715Z Stopping standalonesession daemon (pid: 2699) on 
host fv-az655.
2020-09-27T22:37:53.8014537Z Skipping taskexecutor daemon (pid: 11173), because 
it is not running anymore on fv-az655.
2020-09-27T22:37:53.8019740Z Skipping taskexecutor daemon (pid: 11561), because 
it is not running anymore on fv-az655.
2020-09-27T22:37:53.8022857Z Skipping taskexecutor daemon (pid: 11849), because 
it is not running anymore on fv-az655.
2020-09-27T22:37:53.8023616Z Skipping taskexecutor daemon (pid: 12180), because 
it is not running anymore on fv-az655.
2020-09-27T22:37:53.8024327Z Skipping taskexecutor daemon (pid: 12950), because 
it is not running anymore on fv-az655.
2020-09-27T22:37:53.8025027Z Skipping taskexecutor daemon (pid: 13472), because 
it is not running anymore on fv-az655.
2020-09-27T22:37:53.8025727Z Skipping taskexecutor daemon (pid: 16577), because 
it is not running anymore on fv-az655.
2020-09-27T22:37:53.8026417Z Skipping taskexecutor daemon (pid: 16959), because 
it is not running anymore on fv-az655.
2020-09-27T22:37:53.8027086Z Skipping taskexecutor daemon (pid: 17250), because 
it is not running anymore on fv-az655.
2020-09-27T22:37:53.8027770Z Skipping taskexecutor daemon (pid: 17601), because 
it is not running anymore on fv-az655.
2020-09-27T22:37:53.8028400Z Stopping taskexecutor daemon (pid: 18438) on host 
fv-az655.
2020-09-27T22:37:53.8029314Z 
/home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT/bin/taskmanager.sh:
 line 99: 18438 Terminated  "${FLINK_BIN_DIR}"/flink-daemon.sh 
$STARTSTOP $ENTRYPOINT "${ARGS[@]}"
2020-09-27T22:37:53.8029895Z [FAIL] Test script contains errors.
2020-09-27T22:37:53.8032092Z Checking for errors...
2020-09-27T22:37:55.3713368Z No errors in log files.
2020-09-27T22:37:55.3713935Z Checking for exceptions...
2020-09-27T22:37:56.9046391Z No exceptions in log files.
2020-09-27T22:37:56.9047333Z Checking for non-empty .out files...
2020-09-27T22:37:56.9064402Z No non-empty .out files.
2020-09-27T22:37:56.9064859Z 
2020-09-27T22:37:56.9065588Z [FAIL] 'TPC-DS end-to-end test (Blink planner)' 
failed after 16 minutes and 54 seconds! Test exited with exit code 1
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19436) TPC-DS end-to-end test (Blink planner) failed during shutdown

2020-09-27 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19436:

Component/s: Tests

> TPC-DS end-to-end test (Blink planner) failed during shutdown
> -
>
> Key: FLINK-19436
> URL: https://issues.apache.org/jira/browse/FLINK-19436
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Priority: Major
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7009=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=2b7514ee-e706-5046-657b-3430666e7bd9
> {code}
> 2020-09-27T22:37:53.2236467Z Stopping taskexecutor daemon (pid: 2992) on host 
> fv-az655.
> 2020-09-27T22:37:53.4450715Z Stopping standalonesession daemon (pid: 2699) on 
> host fv-az655.
> 2020-09-27T22:37:53.8014537Z Skipping taskexecutor daemon (pid: 11173), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8019740Z Skipping taskexecutor daemon (pid: 11561), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8022857Z Skipping taskexecutor daemon (pid: 11849), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8023616Z Skipping taskexecutor daemon (pid: 12180), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8024327Z Skipping taskexecutor daemon (pid: 12950), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8025027Z Skipping taskexecutor daemon (pid: 13472), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8025727Z Skipping taskexecutor daemon (pid: 16577), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8026417Z Skipping taskexecutor daemon (pid: 16959), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8027086Z Skipping taskexecutor daemon (pid: 17250), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8027770Z Skipping taskexecutor daemon (pid: 17601), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8028400Z Stopping taskexecutor daemon (pid: 18438) on 
> host fv-az655.
> 2020-09-27T22:37:53.8029314Z 
> /home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT/bin/taskmanager.sh:
>  line 99: 18438 Terminated  "${FLINK_BIN_DIR}"/flink-daemon.sh 
> $STARTSTOP $ENTRYPOINT "${ARGS[@]}"
> 2020-09-27T22:37:53.8029895Z [FAIL] Test script contains errors.
> 2020-09-27T22:37:53.8032092Z Checking for errors...
> 2020-09-27T22:37:55.3713368Z No errors in log files.
> 2020-09-27T22:37:55.3713935Z Checking for exceptions...
> 2020-09-27T22:37:56.9046391Z No exceptions in log files.
> 2020-09-27T22:37:56.9047333Z Checking for non-empty .out files...
> 2020-09-27T22:37:56.9064402Z No non-empty .out files.
> 2020-09-27T22:37:56.9064859Z 
> 2020-09-27T22:37:56.9065588Z [FAIL] 'TPC-DS end-to-end test (Blink planner)' 
> failed after 16 minutes and 54 seconds! Test exited with exit code 1
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19430) Translate page 'datastream_tutorial' into Chinese

2020-09-27 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19430:

Affects Version/s: (was: 1.11.0)

> Translate page 'datastream_tutorial' into Chinese
> -
>
> Key: FLINK-19430
> URL: https://issues.apache.org/jira/browse/FLINK-19430
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Reporter: hailong wang
>Priority: Minor
> Fix For: 1.12.0
>
>
> The page url 
> [datastream_tutorial|https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/python/datastream_tutorial.html]
> The doc is located at /dev/python/user-guide/datastream_tutorial.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19430) Translate page 'datastream_tutorial' into Chinese

2020-09-27 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202945#comment-17202945
 ] 

Dian Fu commented on FLINK-19430:
-

[~hailong wang] I have assigned it to you.

> Translate page 'datastream_tutorial' into Chinese
> -
>
> Key: FLINK-19430
> URL: https://issues.apache.org/jira/browse/FLINK-19430
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Minor
> Fix For: 1.12.0
>
>
> The page url 
> [datastream_tutorial|https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/python/datastream_tutorial.html]
> The doc is located at /dev/python/user-guide/datastream_tutorial.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-19430) Translate page 'datastream_tutorial' into Chinese

2020-09-27 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-19430:
---

Assignee: hailong wang

> Translate page 'datastream_tutorial' into Chinese
> -
>
> Key: FLINK-19430
> URL: https://issues.apache.org/jira/browse/FLINK-19430
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Minor
> Fix For: 1.12.0
>
>
> The page url 
> [datastream_tutorial|https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/python/datastream_tutorial.html]
> The doc is located at /dev/python/user-guide/datastream_tutorial.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-19429) Translate page 'Data Types' into Chinese

2020-09-27 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-19429:
---

Assignee: hailong wang

> Translate page 'Data Types' into Chinese
> 
>
> Key: FLINK-19429
> URL: https://issues.apache.org/jira/browse/FLINK-19429
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Minor
> Fix For: 1.12.0
>
>
> Translate the page 
> [data_types|https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/python/datastream-api-users-guide/data_types.html].
> The doc is located in 
> "flink/docs/dev/python/datastream-api-users-guide/data_types.zh.md"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19429) Translate page 'Data Types' into Chinese

2020-09-27 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202944#comment-17202944
 ] 

Dian Fu commented on FLINK-19429:
-

[~hailong wang] Thanks for working on this issue. I have assigned it to you.

> Translate page 'Data Types' into Chinese
> 
>
> Key: FLINK-19429
> URL: https://issues.apache.org/jira/browse/FLINK-19429
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Minor
> Fix For: 1.12.0
>
>
> Translate the page 
> [data_types|https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/python/datastream-api-users-guide/data_types.html].
> The doc is located in 
> "flink/docs/dev/python/datastream-api-users-guide/data_types.zh.md"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19429) Translate page 'Data Types' into Chinese

2020-09-27 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19429:

Affects Version/s: (was: 1.11.0)

> Translate page 'Data Types' into Chinese
> 
>
> Key: FLINK-19429
> URL: https://issues.apache.org/jira/browse/FLINK-19429
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Minor
> Fix For: 1.12.0
>
>
> Translate the page 
> [data_types|https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/python/datastream-api-users-guide/data_types.html].
> The doc is located in 
> "flink/docs/dev/python/datastream-api-users-guide/data_types.zh.md"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19409) The comment for getValue has wrong code in class ListView

2020-09-27 Thread Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202936#comment-17202936
 ] 

Liu commented on FLINK-19409:
-

Hi, [~jark]. Please review the code. Thank you.

> The comment for getValue has wrong code in class ListView
> -
>
> Key: FLINK-19409
> URL: https://issues.apache.org/jira/browse/FLINK-19409
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Reporter: Liu
>Assignee: Liu
>Priority: Minor
>  Labels: pull-request-available
>
> The comment for getValue currently reads as follows:
> {code:java}
> *    @Override  
> *    public Long getValue(MyAccumulator accumulator) {  
> *        accumulator.list.add(id);  
> *        ... ...  
> *        accumulator.list.get()  
> *         ... ...  
> *        return accumulator.count;  
> *    }  
> {code}
>  Users may be confused by the code "accumulator.list.add(id);". It should 
> be removed. 
>  
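
Below is a minimal, self-contained sketch of what the corrected Javadoc example could 
look like once the add(id) call is dropped from getValue. The names 
MyAggregateFunction, MyAccumulator, list and count follow the surrounding ListView 
documentation example; the method bodies are illustrative only and do not reproduce 
the actual merged change.

{code:java}
import org.apache.flink.table.api.dataview.ListView;
import org.apache.flink.table.functions.AggregateFunction;

public class MyAggregateFunction
        extends AggregateFunction<Long, MyAggregateFunction.MyAccumulator> {

    public static class MyAccumulator {
        public ListView<String> list = new ListView<>();
        public long count = 0L;
    }

    @Override
    public MyAccumulator createAccumulator() {
        return new MyAccumulator();
    }

    public void accumulate(MyAccumulator accumulator, String id) throws Exception {
        // Mutation belongs in accumulate(), not in getValue().
        accumulator.list.add(id);
        accumulator.count++;
    }

    @Override
    public Long getValue(MyAccumulator accumulator) {
        // Read-only access: the confusing accumulator.list.add(id) line is gone.
        // accumulator.list.get() could still be iterated here if the result needed it.
        return accumulator.count;
    }
}
{code}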



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13356: [FLINK-16789][runtime][rest] Enable JMX RMI port retrieval via REST API

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13356:
URL: https://github.com/apache/flink/pull/13356#issuecomment-688955548


   
   ## CI report:
   
   * c95d0e33b9cd23264c899aae7df75c83d290f129 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7007)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-15578) Implement exactly-once JDBC sink

2020-09-27 Thread Roman Khachatryan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202915#comment-17202915
 ] 

Roman Khachatryan commented on FLINK-15578:
---

Hi [~klden],

There are no blockers; it just currently has a lower priority than other features.

However, I hope this will be included in 1.12.

Thanks.

> Implement exactly-once JDBC sink
> 
>
> Key: FLINK-15578
> URL: https://issues.apache.org/jira/browse/FLINK-15578
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Reporter: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As per discussion in the dev mailing list, there are two options:
>  # Write-ahead log
>  # Two-phase commit (XA)
> the latter being preferable.
>  
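
For readers new to the XA option listed above, the following is a minimal sketch of 
the generic two-phase commit flow using only the standard javax.sql / 
javax.transaction.xa APIs. The data source, table name (sink_table) and Xid bytes 
are placeholders chosen for illustration; this is not the Flink connector 
implementation.

{code:java}
import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.XAConnection;
import javax.sql.XADataSource;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

public final class XaTwoPhaseCommitSketch {

    /** Writes one row and commits it via the XA two-phase protocol. */
    public static void writeWithTwoPhaseCommit(XADataSource xaDataSource, long value)
            throws Exception {
        XAConnection xaConnection = xaDataSource.getXAConnection();
        XAResource xaResource = xaConnection.getXAResource();
        Connection connection = xaConnection.getConnection();

        Xid xid = simpleXid(new byte[] {42}, new byte[] {1});

        // Associate the SQL work with an XA transaction branch.
        xaResource.start(xid, XAResource.TMNOFLAGS);
        try (PreparedStatement ps =
                connection.prepareStatement("INSERT INTO sink_table VALUES (?)")) {
            ps.setLong(1, value);
            ps.executeUpdate();
        }
        xaResource.end(xid, XAResource.TMSUCCESS);

        // Phase 1: prepare (e.g. on checkpoint); the database persists the pending branch.
        int vote = xaResource.prepare(xid);

        // Phase 2: commit once the checkpoint is confirmed.
        // XA_RDONLY would mean the branch had no updates and is already complete.
        if (vote == XAResource.XA_OK) {
            xaResource.commit(xid, false);
        }

        connection.close();
        xaConnection.close();
    }

    /** Any Xid implementation with stable, unique ids works; this one is hard-coded. */
    private static Xid simpleXid(byte[] globalId, byte[] branchId) {
        return new Xid() {
            @Override public int getFormatId() { return 1; }
            @Override public byte[] getGlobalTransactionId() { return globalId; }
            @Override public byte[] getBranchQualifier() { return branchId; }
        };
    }
}
{code}

A write-ahead-log sink, by contrast, would buffer rows in Flink state and replay them 
on recovery instead of relying on prepared database transactions.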



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13410: [FLINK-19247][docs-zh] Update Chinese documentation after removal of Kafka 0.10 and 0.11

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13410:
URL: https://github.com/apache/flink/pull/13410#issuecomment-694211098


   
   ## CI report:
   
   * 1ba111e2e377a1ade74832b2fa026298437f53dc Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7006)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13491: [FLINK-19417][python] Fix the bug of the method from_data_stream in table_environement

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13491:
URL: https://github.com/apache/flink/pull/13491#issuecomment-699620559


   
   ## CI report:
   
   * 0131e7e07336c24798a7d3b6692807c93a96a42c UNKNOWN
   * 20a9a58f2aa694681754572c716ee0b408484315 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7004)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13356: [FLINK-16789][runtime][rest] Enable JMX RMI port retrieval via REST API

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13356:
URL: https://github.com/apache/flink/pull/13356#issuecomment-688955548


   
   ## CI report:
   
   * 060bb38dd26df122e05fc232cd730b35717e84b0 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6471)
 
   * c95d0e33b9cd23264c899aae7df75c83d290f129 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7007)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13410: [FLINK-19247][docs-zh] Update Chinese documentation after removal of Kafka 0.10 and 0.11

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13410:
URL: https://github.com/apache/flink/pull/13410#issuecomment-694211098


   
   ## CI report:
   
   * 69930ae1221864a5ccc986ac23c6010b76043759 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6607)
 
   * 1ba111e2e377a1ade74832b2fa026298437f53dc Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7006)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13482: test

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13482:
URL: https://github.com/apache/flink/pull/13482#issuecomment-698903502


   
   ## CI report:
   
   * 5fb5255b9edc3cd74b836a89489a0a81591a514f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7002)
 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6999)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13356: [FLINK-16789][runtime][rest] Enable JMX RMI port retrieval via REST API

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13356:
URL: https://github.com/apache/flink/pull/13356#issuecomment-688955548


   
   ## CI report:
   
   * 060bb38dd26df122e05fc232cd730b35717e84b0 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6471)
 
   * c95d0e33b9cd23264c899aae7df75c83d290f129 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13410: [FLINK-19247][docs-zh] Update Chinese documentation after removal of Kafka 0.10 and 0.11

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13410:
URL: https://github.com/apache/flink/pull/13410#issuecomment-694211098


   
   ## CI report:
   
   * 69930ae1221864a5ccc986ac23c6010b76043759 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6607)
 
   * 1ba111e2e377a1ade74832b2fa026298437f53dc UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Shawn-Hx commented on pull request #13410: [FLINK-19247][docs-zh] Update Chinese documentation after removal of Kafka 0.10 and 0.11

2020-09-27 Thread GitBox


Shawn-Hx commented on pull request #13410:
URL: https://github.com/apache/flink/pull/13410#issuecomment-699653704


   Hi, @klion26 
   Thanks for your good suggestions!
   I have made some changes according to your advice. Please take a look.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13493: [FLINK-19433] [docs] Correct example of FROM_UNIXTIME function in document

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13493:
URL: https://github.com/apache/flink/pull/13493#issuecomment-699640495


   
   ## CI report:
   
   * 18baa5ec57c9dd038aa04bf84b81488ece0bb220 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7005)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Shawn-Hx commented on a change in pull request #13410: [FLINK-19247][docs-zh] Update Chinese documentation after removal of Kafka 0.10 and 0.11

2020-09-27 Thread GitBox


Shawn-Hx commented on a change in pull request #13410:
URL: https://github.com/apache/flink/pull/13410#discussion_r495586969



##
File path: docs/dev/connectors/kafka.zh.md
##
@@ -23,90 +23,32 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+Flink 提供了 [Apache Kafka](https://kafka.apache.org) 连接器,用于向 Kafka topic 
中读取或者写入数据,可提供精确一次的处理语义。
+
 * This will be replaced by the TOC
 {:toc}
 
-此连接器提供了访问 [Apache Kafka](https://kafka.apache.org/) 事件流的服务。
-
-Flink 提供了专门的 Kafka 连接器,向 Kafka topic 中读取或者写入数据。Flink Kafka Consumer 集成了 Flink 
的 Checkpoint 机制,可提供 exactly-once 的处理语义。为此,Flink 并不完全依赖于跟踪 Kafka 
消费组的偏移量,而是在内部跟踪和检查偏移量。
-
-根据你的用例和环境选择相应的包(maven artifact id)和类名。对于大多数用户来说,使用 `FlinkKafkaConsumer`( 
`flink-connector-kafka` 的一部分)是比较合适的。
-
-
-  
-
-  Maven 依赖
-  自从哪个版本开始支持
-  消费者和生产者的类名称
-  Kafka 版本
-  注意
-
-  
-  
-
-flink-connector-kafka{{ site.scala_version_suffix }}
-1.7.0
-FlinkKafkaConsumer
-FlinkKafkaProducer
->= 1.0.0
-
-这个通用的 Kafka 连接器尽力与 Kafka client 的最新版本保持同步。该连接器使用的 Kafka client 版本可能会在 
Flink 版本之间发生变化。从 Flink 1.9 版本开始,它使用 Kafka 2.2.0 client。当前 Kafka 客户端向后兼容 0.10.0 
或更高版本的 Kafka broker。
-但是对于 Kafka 0.11.x 和 0.10.x 版本,我们建议你分别使用专用的 
flink-connector-kafka-0.11{{ site.scala_version_suffix }} 和 
flink-connector-kafka-0.10{{ site.scala_version_suffix }} 连接器。
-
-
-  
-
-
-接着,在你的 maven 项目中导入连接器:
-
-{% highlight xml %}
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-connector-kafka{{ site.scala_version_suffix }}</artifactId>
-  <version>{{ site.version }}</version>
-</dependency>
-{% endhighlight %}
-
-请注意:目前流连接器还不是二进制分发的一部分。
-[在此处]({{ site.baseurl 
}}/zh/dev/project-configuration.html)可以了解到如何链接它们以实现在集群中执行。
-
-## 安装 Apache Kafka
-
-* 按照 [ Kafka 
快速入门](https://kafka.apache.org/documentation.html#quickstart)的说明下载代码并启动 Kafka 
服务器(每次启动应用程序之前都需要启动 Zookeeper 和 Kafka server)。
-* 如果 Kafka 和 Zookeeper 服务器运行在远端机器上,那么必须要将 `config/server.properties` 文件中的 
`advertised.host.name`属性设置为远端设备的 IP 地址。
-
-## Kafka 1.0.0+ 连接器
-
-从 Flink 1.7 开始,有一个新的通用 Kafka 连接器,它不跟踪特定的 Kafka 主版本。相反,它是在 Flink 发布时跟踪最新版本的 
Kafka。
-如果你的 Kafka broker 版本是 1.0.0 或 更新的版本,你应该使用这个 Kafka 连接器。
-如果你使用的是 Kafka 的旧版本( 0.11 或 0.10 ),那么你应该使用与 Kafka broker 版本相对应的连接器。
-
-### 兼容性
-
-通过 Kafka client API 和 broker 的兼容性保证,通用的 Kafka 连接器兼容较旧和较新的 Kafka broker。
-它兼容 Kafka broker 0.11.0 或者更高版本,具体兼容性取决于所使用的功能。有关 Kafka 兼容性的详细信息,请参考 [Kafka 
文档](https://kafka.apache.org/protocol.html#protocol_compatibility)。
-
-### 将 Kafka Connector 从 0.11 迁移到通用版本
+## 依赖
 
-以便执行迁移,请参考 [升级 Jobs 和 Flink 版本指南]({{ site.baseurl }}/zh/ops/upgrading.html):
-* 在全程中使用 Flink 1.9 或更新版本。
-* 不要同时升级 Flink 和 Operator。
-* 确保你的 Job 中所使用的 Kafka Consumer 和 Kafka Producer 分配了唯一的标识符(uid)。
-* 使用 stop with savepoint 的特性来执行 savepoint(例如,使用 `stop --withSavepoint`)[CLI 
命令]({{ site.baseurl }}/zh/ops/cli.html)。
-
-### 用法
-
-要使用通用的 Kafka 连接器,请为它添加依赖关系:
+Apache Flink 集成了通用的 Kafka 连接器,它会尽力与 Kafka client 的最新版本保持同步。
+该连接器使用的 Kafka client 版本可能会在 Flink 版本之间发生变化。
+当前 Kafka client 向后兼容 0.10.0 或更高版本的 Kafka broker。

Review comment:
   这里我不是很确定。
   英文文档中的原文是:"Modern Kafka clients are backwards compatible with broker 
versions 0.10.0 or later"。"backwards compatible" 直译的话应该是 “向后兼容” ?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13482: test

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13482:
URL: https://github.com/apache/flink/pull/13482#issuecomment-698903502


   
   ## CI report:
   
   * 5fb5255b9edc3cd74b836a89489a0a81591a514f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7002)
 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6999)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13491: [FLINK-19417][python] Fix the bug of the method from_data_stream in table_environement

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13491:
URL: https://github.com/apache/flink/pull/13491#issuecomment-699620559


   
   ## CI report:
   
   * 0131e7e07336c24798a7d3b6692807c93a96a42c UNKNOWN
   * 994b24ac690a8ef806bd1f051fa14c22c76dda96 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7000)
 
   * 20a9a58f2aa694681754572c716ee0b408484315 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7004)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-19429) Translate page 'Data Types' into Chinese

2020-09-27 Thread hailong wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202845#comment-17202845
 ] 

hailong wang commented on FLINK-19429:
--

Hi [~hequn8128] [~dianfu], could you please assign it to me? Thanks.

> Translate page 'Data Types' into Chinese
> 
>
> Key: FLINK-19429
> URL: https://issues.apache.org/jira/browse/FLINK-19429
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Priority: Minor
> Fix For: 1.12.0
>
>
> Translate the page 
> [data_types|https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/python/datastream-api-users-guide/data_types.html].
> The doc is located in 
> "flink/docs/dev/python/datastream-api-users-guide/data_types.zh.md"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19430) Translate page 'datastream_tutorial' into Chinese

2020-09-27 Thread hailong wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202844#comment-17202844
 ] 

hailong wang commented on FLINK-19430:
--

Hi [~hequn8128] [~dianfu], could you please assign it to me? Thanks.

> Translate page 'datastream_tutorial' into Chinese
> -
>
> Key: FLINK-19430
> URL: https://issues.apache.org/jira/browse/FLINK-19430
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Priority: Minor
> Fix For: 1.12.0
>
>
> The page url 
> [datastream_tutorial|https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/python/datastream_tutorial.html]
> The doc is located at /dev/python/user-guide/datastream_tutorial.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-19429) Translate page 'Data Types' into Chinese

2020-09-27 Thread hailong wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hailong wang updated FLINK-19429:
-
Comment: was deleted

(was: I did not find an issue that this one duplicates. If so, thank you for 
assigning it to me.)

> Translate page 'Data Types' into Chinese
> 
>
> Key: FLINK-19429
> URL: https://issues.apache.org/jira/browse/FLINK-19429
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Priority: Minor
> Fix For: 1.12.0
>
>
> Translate the page 
> [data_types|https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/python/datastream-api-users-guide/data_types.html].
> The doc located in 
> "flink/docs/dev/python/datastream-api-users-guide/data_types.zh.md"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13493: [FLINK-19433] [docs] Correct example of FROM_UNIXTIME function in document

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13493:
URL: https://github.com/apache/flink/pull/13493#issuecomment-699640495


   
   ## CI report:
   
   * 18baa5ec57c9dd038aa04bf84b81488ece0bb220 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7005)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-19416) Support Python datetime object in from_collection of Python DataStream

2020-09-27 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202841#comment-17202841
 ] 

Nicholas Jiang commented on FLINK-19416:


[~hxbks2ks], if no one has picked up this issue yet, I would like to resolve it.

> Support Python datetime object in from_collection of Python DataStream
> --
>
> Key: FLINK-19416
> URL: https://issues.apache.org/jira/browse/FLINK-19416
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Huang Xingbo
>Priority: Major
> Fix For: 1.12.0
>
>
> Support Python datetime object in from_collection of Python DataStream



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] klion26 commented on a change in pull request #13410: [FLINK-19247][docs-zh] Update Chinese documentation after removal of Kafka 0.10 and 0.11

2020-09-27 Thread GitBox


klion26 commented on a change in pull request #13410:
URL: https://github.com/apache/flink/pull/13410#discussion_r495574668



##
File path: docs/dev/connectors/kafka.zh.md
##
@@ -23,90 +23,32 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+Flink 提供了 [Apache Kafka](https://kafka.apache.org) 连接器,用于向 Kafka topic 
中读取或者写入数据,可提供精确一次的处理语义。
+
 * This will be replaced by the TOC
 {:toc}
 
-此连接器提供了访问 [Apache Kafka](https://kafka.apache.org/) 事件流的服务。
-
-Flink 提供了专门的 Kafka 连接器,向 Kafka topic 中读取或者写入数据。Flink Kafka Consumer 集成了 Flink 
的 Checkpoint 机制,可提供 exactly-once 的处理语义。为此,Flink 并不完全依赖于跟踪 Kafka 
消费组的偏移量,而是在内部跟踪和检查偏移量。
-
-根据你的用例和环境选择相应的包(maven artifact id)和类名。对于大多数用户来说,使用 `FlinkKafkaConsumer`( 
`flink-connector-kafka` 的一部分)是比较合适的。
-
-
-  
-
-  Maven 依赖
-  自从哪个版本开始支持
-  消费者和生产者的类名称
-  Kafka 版本
-  注意
-
-  
-  
-
-flink-connector-kafka{{ site.scala_version_suffix }}
-1.7.0
-FlinkKafkaConsumer
-FlinkKafkaProducer
->= 1.0.0
-
-这个通用的 Kafka 连接器尽力与 Kafka client 的最新版本保持同步。该连接器使用的 Kafka client 版本可能会在 
Flink 版本之间发生变化。从 Flink 1.9 版本开始,它使用 Kafka 2.2.0 client。当前 Kafka 客户端向后兼容 0.10.0 
或更高版本的 Kafka broker。
-但是对于 Kafka 0.11.x 和 0.10.x 版本,我们建议你分别使用专用的 
flink-connector-kafka-0.11{{ site.scala_version_suffix }} 和 
flink-connector-kafka-0.10{{ site.scala_version_suffix }} 连接器。
-
-
-  
-
-
-接着,在你的 maven 项目中导入连接器:
-
-{% highlight xml %}
-
-  org.apache.flink
-  flink-connector-kafka{{ site.scala_version_suffix }}
-  {{ site.version }}
-
-{% endhighlight %}
-
-请注意:目前流连接器还不是二进制分发的一部分。
-[在此处]({{ site.baseurl 
}}/zh/dev/project-configuration.html)可以了解到如何链接它们以实现在集群中执行。
-
-## 安装 Apache Kafka
-
-* 按照 [ Kafka 
快速入门](https://kafka.apache.org/documentation.html#quickstart)的说明下载代码并启动 Kafka 
服务器(每次启动应用程序之前都需要启动 Zookeeper 和 Kafka server)。
-* 如果 Kafka 和 Zookeeper 服务器运行在远端机器上,那么必须要将 `config/server.properties` 文件中的 
`advertised.host.name`属性设置为远端设备的 IP 地址。
-
-## Kafka 1.0.0+ 连接器
-
-从 Flink 1.7 开始,有一个新的通用 Kafka 连接器,它不跟踪特定的 Kafka 主版本。相反,它是在 Flink 发布时跟踪最新版本的 
Kafka。
-如果你的 Kafka broker 版本是 1.0.0 或 更新的版本,你应该使用这个 Kafka 连接器。
-如果你使用的是 Kafka 的旧版本( 0.11 或 0.10 ),那么你应该使用与 Kafka broker 版本相对应的连接器。
-
-### 兼容性
-
-通过 Kafka client API 和 broker 的兼容性保证,通用的 Kafka 连接器兼容较旧和较新的 Kafka broker。
-它兼容 Kafka broker 0.11.0 或者更高版本,具体兼容性取决于所使用的功能。有关 Kafka 兼容性的详细信息,请参考 [Kafka 
文档](https://kafka.apache.org/protocol.html#protocol_compatibility)。
-
-### 将 Kafka Connector 从 0.11 迁移到通用版本
+## 依赖
 
-以便执行迁移,请参考 [升级 Jobs 和 Flink 版本指南]({{ site.baseurl }}/zh/ops/upgrading.html):
-* 在全程中使用 Flink 1.9 或更新版本。
-* 不要同时升级 Flink 和 Operator。
-* 确保你的 Job 中所使用的 Kafka Consumer 和 Kafka Producer 分配了唯一的标识符(uid)。
-* 使用 stop with savepoint 的特性来执行 savepoint(例如,使用 `stop --withSavepoint`)[CLI 
命令]({{ site.baseurl }}/zh/ops/cli.html)。
-
-### 用法
-
-要使用通用的 Kafka 连接器,请为它添加依赖关系:
+Apache Flink 集成了通用的 Kafka 连接器,它会尽力与 Kafka client 的最新版本保持同步。
+该连接器使用的 Kafka client 版本可能会在 Flink 版本之间发生变化。
+当前 Kafka client 向后兼容 0.10.0 或更高版本的 Kafka broker。
+有关 Kafka 兼容性的更多细节,请参考  [Kafka 
官方文档](https://kafka.apache.org/protocol.html#protocol_compatibility)。
 
+
+
 {% highlight xml %}
 
-  org.apache.flink
-  flink-connector-kafka{{ site.scala_version_suffix }}
-  {{ site.version }}
+   org.apache.flink
+   flink-connector-kafka{{ site.scala_version_suffix 
}}
+   {{ site.version }}
 
-{% endhighlight %}
+{% endhighlight %} 
+
+
 
-然后,实例化 source( `FlinkKafkaConsumer`)和 sink( 
`FlinkKafkaProducer`)。除了从模块和类名中删除了特定的 Kafka 版本外,这个 API 向后兼容 Kafka 0.11 版本的 
connector。
+Flink 目前的流连接器还不是二进制发行版的一部分。
+[在此处]({{ site.baseurl 
}}/zh/dev/project-configuration.html)可以了解到如何链接它们以实现在集群中执行。

Review comment:
   1. For links, it is recommended to use the `{% link %}` tag; see the mailing list thread [1].
   2. Could `如何链接它们以实现在集群中执行` be polished a bit more? I can tell what it means now, but it still reads somewhat awkwardly.
   
   [1] 
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Reminder-Prefer-link-tag-in-documentation-td42362.html

##
File path: docs/dev/connectors/kafka.zh.md
##
@@ -431,39 +350,42 @@ stream.addSink(myProducer);
 {% highlight scala %}
 val stream: DataStream[String] = ...
 
+Properties properties = new Properties
+properties.setProperty("bootstrap.servers", "localhost:9092")
+
 val myProducer = new FlinkKafkaProducer[String](
-"localhost:9092", // broker 列表
 "my-topic",   // 目标 topic
-new SimpleStringSchema)   // 序列化 schema
-
-// 0.10+ 版本的 Kafka 允许在将记录写入 Kafka 时附加记录的事件时间戳;
-// 此方法不适用于早期版本的 Kafka
-myProducer.setWriteTimestampToKafka(true)
+new SimpleStringSchema(), // 序列化 schema
+properties,   // producer 配置
+FlinkKafkaProducer.Semantic.EXACTLY_ONCE) // 容错
 
 stream.addSink(myProducer)
 {% endhighlight %}
 
 
 
-上面的例子演示了创建 Flink Kafka Producer 来将流消息写入单个 Kafka 目标 topic 的基本用法。
-对于更高级的用法,这还有其他构造函数变体允许提供以下内容:
+## `SerializationSchema`
 
- * *提供自定义属性*:producer 允许为内部 `KafkaProducer` 提供自定义属性配置。有关如何配置 Kafka Producer 
的详细信息,请参阅  [Apache Kafka 文档](https://kafka.apache.org/documentation.html)。
- 

[GitHub] [flink] flinkbot commented on pull request #13493: [FLINK-19433] [docs] Correct example of FROM_UNIXTIME function in document

2020-09-27 Thread GitBox


flinkbot commented on pull request #13493:
URL: https://github.com/apache/flink/pull/13493#issuecomment-699640495


   
   ## CI report:
   
   * 18baa5ec57c9dd038aa04bf84b81488ece0bb220 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13491: [FLINK-19417][python] Fix the bug of the method from_data_stream in table_environement

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13491:
URL: https://github.com/apache/flink/pull/13491#issuecomment-699620559


   
   ## CI report:
   
   * 0131e7e07336c24798a7d3b6692807c93a96a42c UNKNOWN
   * 994b24ac690a8ef806bd1f051fa14c22c76dda96 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7000)
 
   * 20a9a58f2aa694681754572c716ee0b408484315 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7004)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #13493: [FLINK-19433] [docs] Correct example of FROM_UNIXTIME function in document

2020-09-27 Thread GitBox


flinkbot commented on pull request #13493:
URL: https://github.com/apache/flink/pull/13493#issuecomment-699637870


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 18baa5ec57c9dd038aa04bf84b81488ece0bb220 (Sun Sep 27 
13:47:26 UTC 2020)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-19433) An Error example of FROM_UNIXTIME function in document

2020-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-19433:
---
Labels: pull-request-available  (was: )

> An Error example of FROM_UNIXTIME function in document
> --
>
> Key: FLINK-19433
> URL: https://issues.apache.org/jira/browse/FLINK-19433
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table SQL / API
>Reporter: Kyle Zhang
>Assignee: Kyle Zhang
>Priority: Major
>  Labels: pull-request-available
>
> In the 
> documentation:[https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/functions/systemFunctions.html#temporal-functions]
> There is an example in FROM_UNIXTIME function
> {code:java}
> E.g., FROM_UNIXTIME(44) returns '1970-01-01 09:00:44' if in UTC time zone, 
> but returns '1970-01-01 09:00:44' if in 'Asia/Tokyo' time zone.
> {code}
> However, the correct result should be 1970-01-01 00:00:44 in UTC time zone
>  
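
For reference, a quick way to verify the corrected value outside of Flink (a minimal sketch using plain `java.time`, not Flink's own FROM_UNIXTIME implementation; the class name is made up): epoch second 44 formats as 1970-01-01 00:00:44 in UTC and 1970-01-01 09:00:44 in Asia/Tokyo (UTC+9).

{code:java}
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class FromUnixtimeCheck {
    public static void main(String[] args) {
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
        Instant epoch44 = Instant.ofEpochSecond(44);
        // Prints 1970-01-01 00:00:44 -- the value the documentation should show for UTC.
        System.out.println(fmt.format(epoch44.atZone(ZoneId.of("UTC"))));
        // Prints 1970-01-01 09:00:44 -- Asia/Tokyo is UTC+9.
        System.out.println(fmt.format(epoch44.atZone(ZoneId.of("Asia/Tokyo"))));
    }
}
{code}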



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] secondChoice opened a new pull request #13493: [FLINK-19433] [docs] Correct example of FROM_UNIXTIME function in document

2020-09-27 Thread GitBox


secondChoice opened a new pull request #13493:
URL: https://github.com/apache/flink/pull/13493


   
   
   ## What is the purpose of the change
   
   Correct the example of FROM_UNIXTIME function in document
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no




This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13491: [FLINK-19417][python] Fix the bug of the method from_data_stream in table_environement

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13491:
URL: https://github.com/apache/flink/pull/13491#issuecomment-699620559


   
   ## CI report:
   
   * 0131e7e07336c24798a7d3b6692807c93a96a42c UNKNOWN
   * 994b24ac690a8ef806bd1f051fa14c22c76dda96 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7000)
 
   * 20a9a58f2aa694681754572c716ee0b408484315 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-19433) An Error example of FROM_UNIXTIME function in document

2020-09-27 Thread Benchao Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benchao Li reassigned FLINK-19433:
--

Assignee: Kyle Zhang

> An Error example of FROM_UNIXTIME function in document
> --
>
> Key: FLINK-19433
> URL: https://issues.apache.org/jira/browse/FLINK-19433
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table SQL / API
>Reporter: Kyle Zhang
>Assignee: Kyle Zhang
>Priority: Major
>
> In the 
> documentation:[https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/functions/systemFunctions.html#temporal-functions]
> There is an example in FROM_UNIXTIME function
> {code:java}
> E.g., FROM_UNIXTIME(44) returns '1970-01-01 09:00:44' if in UTC time zone, 
> but returns '1970-01-01 09:00:44' if in 'Asia/Tokyo' time zone.
> {code}
> However, the correct result should be 1970-01-01 00:00:44 in UTC time zone
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19433) An Error example of FROM_UNIXTIME function in document

2020-09-27 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202825#comment-17202825
 ] 

Benchao Li commented on FLINK-19433:


[~KyleZhang] Thanks for reporting this, assigned to you.

> An Error example of FROM_UNIXTIME function in document
> --
>
> Key: FLINK-19433
> URL: https://issues.apache.org/jira/browse/FLINK-19433
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table SQL / API
>Reporter: Kyle Zhang
>Priority: Major
>
> In the 
> documentation:[https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/functions/systemFunctions.html#temporal-functions]
> There is an example in FROM_UNIXTIME function
> {code:java}
> E.g., FROM_UNIXTIME(44) returns '1970-01-01 09:00:44' if in UTC time zone, 
> but returns '1970-01-01 09:00:44' if in 'Asia/Tokyo' time zone.
> {code}
> However, the correct result should be 1970-01-01 00:00:44 in UTC time zone
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] SteNicholas commented on pull request #13491: [FLINK-19417][python] Fix the bug of the method from_data_stream in table_environement

2020-09-27 Thread GitBox


SteNicholas commented on pull request #13491:
URL: https://github.com/apache/flink/pull/13491#issuecomment-699631774


@HuangXingBo @dianfu I have addressed your comments. Please review again.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13492: [FLINK-19181][python] Make python processes respect the calculated managed memory fraction

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13492:
URL: https://github.com/apache/flink/pull/13492#issuecomment-699626263


   
   ## CI report:
   
   * af73558800083e341dcd9c89819cd00b0572696b UNKNOWN
   * ad9145cd549c242e40862c8ce21cf52b06927f00 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7003)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-19435) jdbc JDBCOutputFormat open function invoke Class.forName(drivername)

2020-09-27 Thread xiaodao (Jira)
xiaodao created FLINK-19435:
---

 Summary: jdbc JDBCOutputFormat open function invoke 
Class.forName(drivername)
 Key: FLINK-19435
 URL: https://issues.apache.org/jira/browse/FLINK-19435
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / JDBC
Affects Versions: 1.10.2
Reporter: xiaodao
 Fix For: 1.10.3


When we sink data to multiple JDBC output formats concurrently, the following code

```
protected void establishConnection() throws SQLException, ClassNotFoundException {
    Class.forName(drivername);
    if (username == null) {
        connection = DriverManager.getConnection(dbURL);
    } else {
        connection = DriverManager.getConnection(dbURL, username, password);
    }
}
```

may cause a JDBC driver deadlock. It needs to be changed to a synchronized method.
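
A minimal sketch of the proposed change (a hypothetical helper, not the actual JDBCOutputFormat code): load the driver class behind a shared lock so that output formats opened concurrently cannot deadlock inside Class.forName / DriverManager.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

final class SynchronizedDriverLoading {

    private static final Object DRIVER_LOCK = new Object();

    // Hypothetical helper illustrating the synchronized variant the issue asks for.
    static Connection establishConnection(
            String drivername, String dbURL, String username, String password)
            throws SQLException, ClassNotFoundException {
        synchronized (DRIVER_LOCK) {
            // Driver class loading (and the static registration it triggers)
            // happens under a shared lock, so sinks opening at the same time
            // cannot interleave here and deadlock.
            Class.forName(drivername);
        }
        return username == null
                ? DriverManager.getConnection(dbURL)
                : DriverManager.getConnection(dbURL, username, password);
    }
}
```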



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19433) An Error example of FROM_UNIXTIME function in document

2020-09-27 Thread Kyle Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202818#comment-17202818
 ] 

Kyle Zhang commented on FLINK-19433:


Hi [~libenchao], if you are watching this issue, could you assign it to me? Thanks :)

> An Error example of FROM_UNIXTIME function in document
> --
>
> Key: FLINK-19433
> URL: https://issues.apache.org/jira/browse/FLINK-19433
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table SQL / API
>Reporter: Kyle Zhang
>Priority: Major
>
> In the 
> documentation:[https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/functions/systemFunctions.html#temporal-functions]
> There is an example in FROM_UNIXTIME function
> {code:java}
> E.g., FROM_UNIXTIME(44) returns '1970-01-01 09:00:44' if in UTC time zone, 
> but returns '1970-01-01 09:00:44' if in 'Asia/Tokyo' time zone.
> {code}
> However, the correct result should be 1970-01-01 00:00:44 in UTC time zone
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13492: [FLINK-19181][python] Make python processes respect the calculated managed memory fraction

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13492:
URL: https://github.com/apache/flink/pull/13492#issuecomment-699626263


   
   ## CI report:
   
   * af73558800083e341dcd9c89819cd00b0572696b UNKNOWN
   * ad9145cd549c242e40862c8ce21cf52b06927f00 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13482: test

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13482:
URL: https://github.com/apache/flink/pull/13482#issuecomment-698903502


   
   ## CI report:
   
   * eea7bbdffba687cacc1d5d80fe00113ec5a0c735 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6998)
 
   * 5fb5255b9edc3cd74b836a89489a0a81591a514f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6999)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7002)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11509: [FLINK-16753] Use CheckpointException to wrap exceptions thrown from AsyncCheckpointRunnable

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #11509:
URL: https://github.com/apache/flink/pull/11509#issuecomment-603811265


   
   ## CI report:
   
   * e01cd5124932776df8549097e1d0d57a31bf3cc4 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6996)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] SteNicholas commented on a change in pull request #13491: [FLINK-19417][python] Fix the bug of the method from_data_stream in table_environement

2020-09-27 Thread GitBox


SteNicholas commented on a change in pull request #13491:
URL: https://github.com/apache/flink/pull/13491#discussion_r495566680



##
File path: flink-python/pyflink/table/table_environment.py
##
@@ -1770,11 +1771,19 @@ def from_data_stream(self, data_stream: DataStream, 
fields: List[str] = None) ->
of the Table
 :return: The converted Table.
 """
-if fields is not None:
-j_table = self._j_tenv.fromDataStream(data_stream._j_data_stream, 
fields)
-else:
+j_table = None
+if len(fields) == 0:
 j_table = self._j_tenv.fromDataStream(data_stream._j_data_stream)
-return Table(j_table=j_table, t_env=self._j_tenv)
+elif len(fields) == 1 and isinstance(fields[0], str):
+j_table = self._j_tenv.fromDataStream(data_stream._j_data_stream, 
fields[0])
+elif len(fields) > 0 and \
+[isinstance(f, Expression) for f in fields] == [True] * 
len(fields):
+gateway = get_gateway()
+j_table = self._j_tenv.fromDataStream(data_stream._j_data_stream,
+  
to_jarray(gateway.jvm.Expression,
+
[_get_java_expression(f)
+ for f in fields]))
+return None if j_table is None else Table(j_table=j_table, t_env=self)

Review comment:
   @dianfu @HuangXingBo OK, I will modify this return value as you 
mentioned.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] SteNicholas commented on a change in pull request #13491: [FLINK-19417][python] Fix the bug of the method from_data_stream in table_environement

2020-09-27 Thread GitBox


SteNicholas commented on a change in pull request #13491:
URL: https://github.com/apache/flink/pull/13491#discussion_r495566600



##
File path: flink-python/pyflink/table/table_environment.py
##
@@ -1770,11 +1771,19 @@ def from_data_stream(self, data_stream: DataStream, 
fields: List[str] = None) ->
of the Table
 :return: The converted Table.
 """
-if fields is not None:
-j_table = self._j_tenv.fromDataStream(data_stream._j_data_stream, 
fields)
-else:
+j_table = None
+if len(fields) == 0:
 j_table = self._j_tenv.fromDataStream(data_stream._j_data_stream)
-return Table(j_table=j_table, t_env=self._j_tenv)
+elif len(fields) == 1 and isinstance(fields[0], str):
+j_table = self._j_tenv.fromDataStream(data_stream._j_data_stream, 
fields[0])

Review comment:
   @HuangXingBo Yes, I would like to add a warning log to tell users about this 
deprecated method.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dianfu commented on a change in pull request #13491: [FLINK-19417][python] Fix the bug of the method from_data_stream in table_environement

2020-09-27 Thread GitBox


dianfu commented on a change in pull request #13491:
URL: https://github.com/apache/flink/pull/13491#discussion_r495565821



##
File path: flink-python/pyflink/table/table_environment.py
##
@@ -1770,11 +1771,19 @@ def from_data_stream(self, data_stream: DataStream, 
fields: List[str] = None) ->
of the Table
 :return: The converted Table.
 """
-if fields is not None:
-j_table = self._j_tenv.fromDataStream(data_stream._j_data_stream, 
fields)
-else:
+j_table = None
+if len(fields) == 0:
 j_table = self._j_tenv.fromDataStream(data_stream._j_data_stream)
-return Table(j_table=j_table, t_env=self._j_tenv)
+elif len(fields) == 1 and isinstance(fields[0], str):
+j_table = self._j_tenv.fromDataStream(data_stream._j_data_stream, 
fields[0])
+elif len(fields) > 0 and \
+[isinstance(f, Expression) for f in fields] == [True] * 
len(fields):

Review comment:
   what about `all(isinstance(f, Expression) for f in fields)`

##
File path: flink-python/pyflink/table/table_environment.py
##
@@ -1770,11 +1771,19 @@ def from_data_stream(self, data_stream: DataStream, 
fields: List[str] = None) ->
of the Table
 :return: The converted Table.
 """
-if fields is not None:
-j_table = self._j_tenv.fromDataStream(data_stream._j_data_stream, 
fields)
-else:
+j_table = None
+if len(fields) == 0:
 j_table = self._j_tenv.fromDataStream(data_stream._j_data_stream)
-return Table(j_table=j_table, t_env=self._j_tenv)
+elif len(fields) == 1 and isinstance(fields[0], str):
+j_table = self._j_tenv.fromDataStream(data_stream._j_data_stream, 
fields[0])
+elif len(fields) > 0 and \
+[isinstance(f, Expression) for f in fields] == [True] * 
len(fields):
+gateway = get_gateway()
+j_table = self._j_tenv.fromDataStream(data_stream._j_data_stream,
+  
to_jarray(gateway.jvm.Expression,

Review comment:
   can use to_expression_jarray

##
File path: flink-python/pyflink/table/table_environment.py
##
@@ -1770,11 +1771,19 @@ def from_data_stream(self, data_stream: DataStream, 
fields: List[str] = None) ->
of the Table
 :return: The converted Table.
 """
-if fields is not None:
-j_table = self._j_tenv.fromDataStream(data_stream._j_data_stream, 
fields)
-else:
+j_table = None
+if len(fields) == 0:
 j_table = self._j_tenv.fromDataStream(data_stream._j_data_stream)
-return Table(j_table=j_table, t_env=self._j_tenv)
+elif len(fields) == 1 and isinstance(fields[0], str):
+j_table = self._j_tenv.fromDataStream(data_stream._j_data_stream, 
fields[0])
+elif len(fields) > 0 and \
+[isinstance(f, Expression) for f in fields] == [True] * 
len(fields):
+gateway = get_gateway()
+j_table = self._j_tenv.fromDataStream(data_stream._j_data_stream,
+  
to_jarray(gateway.jvm.Expression,
+
[_get_java_expression(f)
+ for f in fields]))
+return None if j_table is None else Table(j_table=j_table, t_env=self)

Review comment:
   I agree with @HuangXingBo; you could refer to Table.select as an example.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #13492: [FLINK-19181][python] Make python processes respect the calculated managed memory fraction

2020-09-27 Thread GitBox


flinkbot commented on pull request #13492:
URL: https://github.com/apache/flink/pull/13492#issuecomment-699626263


   
   ## CI report:
   
   * af73558800083e341dcd9c89819cd00b0572696b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13482: test

2020-09-27 Thread GitBox


flinkbot edited a comment on pull request #13482:
URL: https://github.com/apache/flink/pull/13482#issuecomment-698903502


   
   ## CI report:
   
   * 167f0f9f5ed65f270bbdff26795639a81617dbdb Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6994)
 
   * eea7bbdffba687cacc1d5d80fe00113ec5a0c735 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6998)
 
   * 5fb5255b9edc3cd74b836a89489a0a81591a514f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6999)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7002)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #13492: [FLINK-19181][python] Make python processes respect the calculated managed memory fraction

2020-09-27 Thread GitBox


flinkbot commented on pull request #13492:
URL: https://github.com/apache/flink/pull/13492#issuecomment-699625449


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit af73558800083e341dcd9c89819cd00b0572696b (Sun Sep 27 
11:53:52 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-19181) Make python processes respect the calculated managed memory fraction

2020-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-19181:
---
Labels: pull-request-available  (was: )

> Make python processes respect the calculated managed memory fraction
> 
>
> Key: FLINK-19181
> URL: https://issues.apache.org/jira/browse/FLINK-19181
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Xintong Song
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

