[GitHub] [flink] flinkbot commented on pull request #12195: [FLINK-17449][sql-parser][table-api-java][table-planner-blink][hive] …

2020-05-16 Thread GitBox


flinkbot commented on pull request #12195:
URL: https://github.com/apache/flink/pull/12195#issuecomment-629748103


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit eef0ff2deffeb35b7aa34da7e35c07a054bd1643 (Sun May 17 
05:59:08 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-17449).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from a committer.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
   The bot tracks the review progress through labels, which are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] guoweiM commented on pull request #12132: [FLINK-17593][Connectors/FileSystem] Support arbitrary recovery mechanism for PartFileWriter

2020-05-16 Thread GitBox


guoweiM commented on pull request #12132:
URL: https://github.com/apache/flink/pull/12132#issuecomment-629748020


   Hi @aljoscha @kl0u,
   I have resolved all the comments:
   1. Renamed `PartFileFactory` to `BucketWriter` and `PartFileWriter` to `InProgressFileWriter`.
   2. Updated the `BucketStateSerializerTest`.







[jira] [Updated] (FLINK-17449) Implement ADD/DROP partitions

2020-05-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17449:
---
Labels: pull-request-available  (was: )

> Implement ADD/DROP partitions
> -----------------------------
>
> Key: FLINK-17449
> URL: https://issues.apache.org/jira/browse/FLINK-17449
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Table SQL / API
>Reporter: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Introduce ADD/DROP partitions operations. Will only implement syntax for the 
> Hive parser in this ticket.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] lirui-apache opened a new pull request #12195: [FLINK-17449][sql-parser][table-api-java][table-planner-blink][hive] …

2020-05-16 Thread GitBox


lirui-apache opened a new pull request #12195:
URL: https://github.com/apache/flink/pull/12195


   …Implement ADD/DROP partitions
   
   
   
   ## What is the purpose of the change
   
   To implement ADD/DROP PARTITIONS for the Hive dialect.
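   Since this ticket only adds syntax for the Hive parser, the target statements follow Hive's existing `ALTER TABLE` DDL. A sketch of the kind of statements being enabled (table name, partition column, and location are illustrative, not taken from the PR):

```sql
-- Add a partition (IF NOT EXISTS and LOCATION are optional in Hive's grammar)
ALTER TABLE orders ADD IF NOT EXISTS
  PARTITION (dt = '2020-05-16') LOCATION '/data/orders/dt=2020-05-16';

-- Drop a partition (IF EXISTS is optional)
ALTER TABLE orders DROP IF EXISTS PARTITION (dt = '2020-05-16');
```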
   
   
   ## Brief change log
   
 - Add SqlNodes for add/drop partitions.
 - Add AlterTableOperations for add/drop partitions.
 - Hook up in the planner.
 - Add test cases.
   
   ## Verifying this change
   
   Added test cases.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? docs
   







[GitHub] [flink] flinkbot edited a comment on pull request #12073: [FLINK-17735][streaming] Add specialized collecting iterator

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12073:
URL: https://github.com/apache/flink/pull/12073#issuecomment-626524463


   
   ## CI report:
   
   * 424d752d1d5e49c752ccd79561ce5cfcd5ea7d1d UNKNOWN
   * ca26d6edd7772ee46d24b05c01952a10887eb3f7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1574)
 
   * 5ff489bbcfdc393ed5835f5d0183d374cd0acb3f UNKNOWN
   * 579236dbe523ccc71e62f4f9becdf0937d3b1bf4 UNKNOWN
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12073: [FLINK-17735][streaming] Add specialized collecting iterator

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12073:
URL: https://github.com/apache/flink/pull/12073#issuecomment-626524463


   
   ## CI report:
   
   * 424d752d1d5e49c752ccd79561ce5cfcd5ea7d1d UNKNOWN
   * ca26d6edd7772ee46d24b05c01952a10887eb3f7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1574)
 
   * 5ff489bbcfdc393ed5835f5d0183d374cd0acb3f UNKNOWN
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12194: [FLINK-17764][Examples]Update tips about the default planner when the planner parameter value is not recognized

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12194:
URL: https://github.com/apache/flink/pull/12194#issuecomment-629741058


   
   ## CI report:
   
   * b646f1df4c1dc9797662fac8fe7f54c6a3de4682 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1587)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] TsReaper commented on a change in pull request #12073: [FLINK-17735][streaming] Add specialized collecting iterator

2020-05-16 Thread GitBox


TsReaper commented on a change in pull request #12073:
URL: https://github.com/apache/flink/pull/12073#discussion_r426216888



##
File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/collect/CollectResultFetcher.java
##
@@ -0,0 +1,343 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.api.operators.collect;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.common.JobExecutionResult;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.accumulators.SerializedListAccumulator;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.common.typeutils.base.array.BytePrimitiveArraySerializer;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.runtime.jobgraph.OperatorID;
+import org.apache.flink.runtime.operators.coordination.CoordinationRequestGateway;
+import org.apache.flink.util.Preconditions;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+/**
+ * A fetcher which fetches query results from the sink and provides exactly-once semantics.
+ */
+public class CollectResultFetcher<T> {
+
+   private static final int DEFAULT_RETRY_MILLIS = 100;
+   private static final long DEFAULT_ACCUMULATOR_GET_MILLIS = 1;
+
+   private static final Logger LOG = LoggerFactory.getLogger(CollectResultFetcher.class);
+
+   private final CompletableFuture<OperatorID> operatorIdFuture;
+   private final String accumulatorName;
+   private final int retryMillis;
+
+   private ResultBuffer<T> buffer;
+
+   private JobClient jobClient;
+   private boolean terminated;
+   private boolean closed;
+
+   public CollectResultFetcher(
+   CompletableFuture<OperatorID> operatorIdFuture,
+   TypeSerializer<T> serializer,
+   String accumulatorName) {
+   this(
+   operatorIdFuture,
+   serializer,
+   accumulatorName,
+   DEFAULT_RETRY_MILLIS);
+   }
+
+   @VisibleForTesting
+   public CollectResultFetcher(
+   CompletableFuture<OperatorID> operatorIdFuture,
+   TypeSerializer<T> serializer,
+   String accumulatorName,
+   int retryMillis) {
+   this.operatorIdFuture = operatorIdFuture;
+   this.accumulatorName = accumulatorName;
+   this.retryMillis = retryMillis;
+
+   this.buffer = new ResultBuffer<>(serializer);
+
+   this.terminated = false;
+   }
+
+   public void setJobClient(JobClient jobClient) {
+   Preconditions.checkArgument(
+   jobClient instanceof CoordinationRequestGateway,
+   "Job client must be a CoordinationRequestGateway. This is a bug.");
+   this.jobClient = jobClient;
+   }
+
+   @SuppressWarnings("unchecked")
+   public T next() {
+   if (closed) {
+   return null;
+   }
+
+   T res = buffer.next();
+   if (res != null) {
+   // we still have user-visible results, just use them
+   return res;
+   } else if (terminated) {
+   // no user-visible results, but job has terminated, we have to return
+   return null;
+   }
+
+   // we're going to fetch some more
+   while (true) {
+   if (isJobTerminated()) {
+   // job terminated, read results from accumulator
+   terminated = true;
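
   The truncated fetch loop above drains a local buffer and, when it runs dry, polls the job until new results arrive or the job terminates. A self-contained sketch of that poll-with-retry pattern, with no Flink dependencies (all names here are illustrative, not the PR's actual API):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Supplier;

/**
 * Illustrative poll-with-retry fetcher: drains a buffer, refilling it
 * from a source until the source reports termination.
 */
public class RetryFetcher<T> {
    private final Supplier<Queue<T>> fetchOnce;   // one fetch attempt; may return an empty batch
    private final Supplier<Boolean> isTerminated; // has the "job" finished?
    private final long retryMillis;
    private final Queue<T> buffer = new ArrayDeque<>();

    public RetryFetcher(Supplier<Queue<T>> fetchOnce, Supplier<Boolean> isTerminated, long retryMillis) {
        this.fetchOnce = fetchOnce;
        this.isTerminated = isTerminated;
        this.retryMillis = retryMillis;
    }

    /** Returns the next result, or null once the source has terminated and the buffer is drained. */
    public T next() {
        while (true) {
            T res = buffer.poll();
            if (res != null) {
                return res; // we still have buffered results, just use them
            }
            if (isTerminated.get()) {
                return null; // terminated and drained: nothing more will arrive
            }
            buffer.addAll(fetchOnce.get()); // try to fetch a new batch
            if (buffer.isEmpty()) {
                try {
                    Thread.sleep(retryMillis); // back off before the next attempt
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return null; // give up if interrupted
                }
            }
        }
    }
}
```

   The real `CollectResultFetcher` additionally reads final results from an accumulator once the job has terminated (as the comment in the snippet indicates); the sketch only captures the retry loop.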

[GitHub] [flink] flinkbot edited a comment on pull request #12193: [FLINK-17759][runtime] Remove unused RestartIndividualStrategy

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12193:
URL: https://github.com/apache/flink/pull/12193#issuecomment-629673659


   
   ## CI report:
   
   * 3f6a40a4f0933023bc3172529ed0c7d5a6c422fb Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1569)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #11725:
URL: https://github.com/apache/flink/pull/11725#issuecomment-613252882


   
   ## CI report:
   
   * 066795205734add3b142a92c687c98b25253985e UNKNOWN
   * 9af69eb96e9a0ddaff4937e9d926feff92439f32 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1552)
 
   * aa6298086f01efe5d6ddd1356d7e289804f57a9b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1585)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12029: [FLINK-17451][sql-parser][table-planner-blink][hive] Implement view D…

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12029:
URL: https://github.com/apache/flink/pull/12029#issuecomment-625673739


   
   ## CI report:
   
   * 9405ea4470dc022ffb514f603396fc6bb2582835 UNKNOWN
   * c1ad4b5d93e10b76fad14269f46fa62e4d771bed Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1554)
 
   * cbf5ef6f14500b63eafb43901b3efa6db28c22b2 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1586)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink-web] yangjf2019 commented on a change in pull request #268: [FLINK-13343][docs-zh] Translate "Contribute Code" page into Chinese

2020-05-16 Thread GitBox


yangjf2019 commented on a change in pull request #268:
URL: https://github.com/apache/flink-web/pull/268#discussion_r426215899



##
File path: contributing/contribute-code.zh.md
##
@@ -136,91 +136,91 @@ Apache Flink is maintained, improved, and extended by 
code contributions of volu
 
 
 
-### 1. Create Jira Ticket and Reach Consensus
+### 1. 创建 Jira 工单并达成共识。
 
 
-The first step for making a contribution to Apache Flink is to reach consensus 
with the Flink community. This means agreeing on the scope and implementation 
approach of a change.
+向 Apache Flink 做出贡献的第一步是与 Flink 社区达成共识,这意味着需要一起商定更改的范围和实现的方法。
 
-In most cases, the discussion should happen in [Flink's bug tracker: 
Jira](https://issues.apache.org/jira/projects/FLINK/summary).
+在大多数情况下,我们应该在 [Flink 的 Bug 
追踪器:Jira](https://issues.apache.org/jira/projects/FLINK/summary) 中进行讨论。
 
-The following types of changes require a `[DISCUSS]` thread on the 
dev@flink.a.o Flink mailing list:
+以下类型的更改需要向 Flink 的 d...@flink.apache.org 邮件列表发一封以 `[DISCUSS]` 开头的邮件:
 
- - big changes (major new feature; big refactorings, involving multiple 
components)
- - potentially controversial changes or issues
- - changes with very unclear approaches or multiple equal approaches
+ - 重大变化(主要新功能、大重构和涉及多个组件)
+ - 可能存在争议的改动或问题
+ - 采用非常不明确的方法或有多种实现方法
 
- Do not open a Jira ticket for these types of changes before the discussion 
has come to a conclusion.
- Jira tickets based on a dev@ discussion need to link to that discussion and 
should summarize the outcome.
+ 在讨论未达成一致之前,不要为这些类型的更改打开 Jira 工单。
+ 基于 dev 邮件讨论的 Jira 工单需要链接到该讨论,并总结结果。
 
 
 
-**Requirements for a Jira ticket to get consensus:**
+**Jira 工单获得共识的要求:**
 
-  - Formal requirements
- - The *Title* describes the problem concisely.
- - The *Description* gives all the details needed to understand the 
problem or feature request.
- - The *Component* field is set: Many committers and contributors only 
focus on certain subsystems of Flink. Setting the appropriate component is 
important for getting their attention.
-  - There is **agreement** that the ticket solves a valid problem, and that it 
is a **good fit** for Flink.
-The Flink community considers the following aspects:
- - Does the contribution alter the behavior of features or components in a 
way that it may break previous users’ programs and setups? If yes, there needs 
to be a discussion and agreement that this change is desirable.
- - Does the contribution conceptually fit well into Flink? Is it too much 
of a special case such that it makes things more complicated for the common 
case, or bloats the abstractions / APIs?
- - Does the feature fit well into Flink’s architecture? Will it scale and 
keep Flink flexible for the future, or will the feature restrict Flink in the 
future?
- - Is the feature a significant new addition (rather than an improvement 
to an existing part)? If yes, will the Flink community commit to maintaining 
this feature?
- - Does this feature align well with Flink's roadmap and currently ongoing 
efforts?
- - Does the feature produce added value for Flink users or developers? Or 
does it introduce the risk of regression without adding relevant user or 
developer benefit?
- - Could the contribution live in another repository, e.g., Apache Bahir 
or another external repository?
- - Is this a contribution just for the sake of getting a commit in an open 
source project (fixing typos, style changes merely for taste reasons)
-  - There is **consensus** on how to solve the problem. This includes 
considerations such as
-- API and data backwards compatibility and migration strategies
-- Testing strategies
-- Impact on Flink's build time
-- Dependencies and their licenses
+  - 正式要求
+ - 描述问题的 *Title* 要简明扼要。
+ - 在 *Description* 中要提供了解问题或功能请求所需的所有详细信息。
+ - 要设置 *Component* 字段:许多 committers 和贡献者,只专注于 Flink 
的某些子系统。设置适当的组件标签对于引起他们的注意很重要。
+  - 社区*一致同意*使用工单是有效解决问题的方法,而且这**非常适合** Flink。 
+Flink 社区考虑了以下几个方面:
+ - 这种贡献是否会改变特性或组件的性能,从而破坏以前的用户程序和设置?如果是,那么就需要讨论并达成一致意见,证明这种改变是可取的。
+ - 这个贡献在概念上是否适合 Flink ?这是否是一种特殊场景?支持这种场景后会导致通用的场景变得更复杂,还是使整理抽象或者 APIs 
变得更臃肿?
+ - 该功能是否适合 Flink 的架构?它是否易扩展并保持 Flink 未来的灵活性,或者该功能将来会限制 Flink 吗?
+ - 该特性是一个重要的新增内容(而不是对现有内容的改进)吗?如果是,Flink 社区会承诺维护这个特性吗?
+ - 这个特性是否与 Flink 的路线图以及当前正在进行的工作内容一致?
+ - 该特性是否为 Flink 用户或开发人员带来了附加价值?或者它引入了回归的风险而没有给相关的用户或开发人员带来好处?
+ - 该贡献是否存在于其他仓库中,例如 Apache Bahir 或者其他第三方库?
+ - 这仅仅是为了在开源项目中获得提交而做出的贡献吗(仅仅是为了获得贡献而贡献,才去修复拼写错误、改变代码风格)?
+  - 在如何解决这个问题上已有**共识**,包括以下需要考虑的因素
+- API、数据向后兼容性和迁移策略
+- 测试策略
+- 对 Flink 构建时间的影响
+- 依赖关系及其许可证
 
-If a change is identified as a large or controversial change in the discussion 
on Jira, it might require a [Flink Improvement Proposal 
(FLIP)](https://cwiki.apache.org/confluence/display/FLINK/Flink+Improvement+Proposals)
 or a discussion on the [dev mailing list]( {{ site.base 
}}/community.html#mailing-lists) to reach agreement and consensus.
+如果在 

[GitHub] [flink-web] yangjf2019 commented on a change in pull request #268: [FLINK-13343][docs-zh] Translate "Contribute Code" page into Chinese

2020-05-16 Thread GitBox


yangjf2019 commented on a change in pull request #268:
URL: https://github.com/apache/flink-web/pull/268#discussion_r426215858



##
File path: contributing/contribute-code.zh.md
##
@@ -79,44 +79,44 @@ Apache Flink is maintained, improved, and extended by code 
contributions of volu
 
 
 
-Note: The code contribution process has changed recently (June 2019). The community <a href="https://lists.apache.org/thread.html/1e2b85d0095331606ad0411ca028f061382af08138776146589914f8@%3Cdev.flink.apache.org%3E">decided</a> to shift the "backpressure" from pull requests to Jira, by requiring contributors to get consensus (indicated by being assigned to the ticket) before opening a pull request.
+注意:最近(2019 年 6 月),代码贡献步骤有改动。社区<a href="https://lists.apache.org/thread.html/1e2b85d0095331606ad0411ca028f061382af08138776146589914f8@%3Cdev.flink.apache.org%3E">决定</a>将原来直接提交 pull request 的方式转移到 Jira 上,要求贡献者在创建 pull request 之前需在 Jira 上达成共识(通过分配到的工单来体现),以减轻 PR review 的压力。
 
 
 
 
   
 
   
-1Discuss
-Create a Jira ticket or mailing list discussion and reach 
consensus
-Agree on importance, relevance, scope of the ticket, discuss the 
implementation approach and find a committer willing to review and merge the 
change.
-Only committers can assign a Jira ticket.
+1讨论
+在 Jira 上创建工单或邮件列表讨论并达成共识
+商定重要性、相关性、工单的范围,讨论实现方案,并找到愿意审查和合并更改的 committer。
+只有 committers 才能分配 Jira 工单。
   
 
   
   
 
   
-2Implement
-Implement the change according to the Code Style and Quality Guide 
and the approach agreed upon in the Jira ticket. 
-Only start working on the implementation if there is consensus 
on the approach (e.g. you are assigned to the ticket)
+2实现
+根据代码样式和质量指南,以及 Jira 
工单中商定的方法去实现更改。 

Review comment:
   Thanks, I have verified it using this approach and will pay more attention to this in future translations.









[GitHub] [flink] flinkbot commented on pull request #12194: [FLINK-17764][Examples]Update tips about the default planner when the planner parameter value is not recognized

2020-05-16 Thread GitBox


flinkbot commented on pull request #12194:
URL: https://github.com/apache/flink/pull/12194#issuecomment-629741058


   
   ## CI report:
   
   * b646f1df4c1dc9797662fac8fe7f54c6a3de4682 UNKNOWN
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12145: [FLINK-17428] [table-planner-blink] supports projection push down on new table source interface in blink planner

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12145:
URL: https://github.com/apache/flink/pull/12145#issuecomment-628490262


   
   ## CI report:
   
   * 3747e17ecf18448088dcb467d8bea240b00f12b6 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1550)
 
   * 597608bb0ab2af04ee3143a6e82e5b3a5001cf48 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1583)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12176: [FLINK-17029][jdbc]Introduce a new JDBC connector with new property keys

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12176:
URL: https://github.com/apache/flink/pull/12176#issuecomment-629283127


   
   ## CI report:
   
   * 057d8fa644c9a203a753fe184de8e204ea81918e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1566)
 
   * e557b09c5dc9371c42542211651b79ea749cbf03 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1584)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12120: [FLINK-17547] Support unaligned checkpoints for records spilled to files

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12120:
URL: https://github.com/apache/flink/pull/12120#issuecomment-628150581


   
   ## CI report:
   
   * 95c57fc02f0f4c16e685df54b249f2386177126c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1571)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink-web] yangjf2019 commented on a change in pull request #268: [FLINK-13343][docs-zh] Translate "Contribute Code" page into Chinese

2020-05-16 Thread GitBox


yangjf2019 commented on a change in pull request #268:
URL: https://github.com/apache/flink-web/pull/268#discussion_r426214595



##
File path: contributing/contribute-code.zh.md
##
@@ -136,91 +136,91 @@ Apache Flink is maintained, improved, and extended by 
code contributions of volu
 
 
 
-### 1. Create Jira Ticket and Reach Consensus
+### 1. 创建 Jira 工单并达成共识。
 
 
-The first step for making a contribution to Apache Flink is to reach consensus 
with the Flink community. This means agreeing on the scope and implementation 
approach of a change.
+向 Apache Flink 做出贡献的第一步是与 Flink 社区达成共识,这意味着需要一起商定更改的范围和实现的方法。
 
-In most cases, the discussion should happen in [Flink's bug tracker: 
Jira](https://issues.apache.org/jira/projects/FLINK/summary).
+在大多数情况下,我们应该在 [Flink 的 Bug 
追踪器:Jira](https://issues.apache.org/jira/projects/FLINK/summary) 中进行讨论。
 
-The following types of changes require a `[DISCUSS]` thread on the 
dev@flink.a.o Flink mailing list:
+以下类型的更改需要向 Flink 的 d...@flink.apache.org 邮件列表发一封以 `[DISCUSS]` 开头的邮件:
 
- - big changes (major new feature; big refactorings, involving multiple 
components)
- - potentially controversial changes or issues
- - changes with very unclear approaches or multiple equal approaches
+ - 重大变化(主要新功能、大重构和涉及多个组件)
+ - 可能存在争议的改动或问题
+ - 采用非常不明确的方法或有多种实现方法
 
- Do not open a Jira ticket for these types of changes before the discussion 
has come to a conclusion.
- Jira tickets based on a dev@ discussion need to link to that discussion and 
should summarize the outcome.
+ 在讨论未达成一致之前,不要为这些类型的更改打开 Jira 工单。
+ 基于 dev 邮件讨论的 Jira 工单需要链接到该讨论,并总结结果。
 
 
 
-**Requirements for a Jira ticket to get consensus:**
+**Jira 工单获得共识的要求:**
 
-  - Formal requirements
- - The *Title* describes the problem concisely.
- - The *Description* gives all the details needed to understand the 
problem or feature request.
- - The *Component* field is set: Many committers and contributors only 
focus on certain subsystems of Flink. Setting the appropriate component is 
important for getting their attention.
-  - There is **agreement** that the ticket solves a valid problem, and that it 
is a **good fit** for Flink.
-The Flink community considers the following aspects:
- - Does the contribution alter the behavior of features or components in a 
way that it may break previous users’ programs and setups? If yes, there needs 
to be a discussion and agreement that this change is desirable.
- - Does the contribution conceptually fit well into Flink? Is it too much 
of a special case such that it makes things more complicated for the common 
case, or bloats the abstractions / APIs?
- - Does the feature fit well into Flink’s architecture? Will it scale and 
keep Flink flexible for the future, or will the feature restrict Flink in the 
future?
- - Is the feature a significant new addition (rather than an improvement 
to an existing part)? If yes, will the Flink community commit to maintaining 
this feature?
- - Does this feature align well with Flink's roadmap and currently ongoing 
efforts?
- - Does the feature produce added value for Flink users or developers? Or 
does it introduce the risk of regression without adding relevant user or 
developer benefit?
- - Could the contribution live in another repository, e.g., Apache Bahir 
or another external repository?
- - Is this a contribution just for the sake of getting a commit in an open 
source project (fixing typos, style changes merely for taste reasons)
-  - There is **consensus** on how to solve the problem. This includes 
considerations such as
-- API and data backwards compatibility and migration strategies
-- Testing strategies
-- Impact on Flink's build time
-- Dependencies and their licenses
+  - 正式要求
+ - 描述问题的 *Title* 要简明扼要。
+ - 在 *Description* 中要提供了解问题或功能请求所需的所有详细信息。
+ - 要设置 *Component* 字段:许多 committers 和贡献者,只专注于 Flink 
的某些子系统。设置适当的组件标签对于引起他们的注意很重要。
+  - 社区*一致同意*使用工单是有效解决问题的方法,而且这**非常适合** Flink。 
+Flink 社区考虑了以下几个方面:
+ - 这种贡献是否会改变特性或组件的性能,从而破坏以前的用户程序和设置?如果是,那么就需要讨论并达成一致意见,证明这种改变是可取的。
+ - 这个贡献在概念上是否适合 Flink ?这是否是一种特殊场景?支持这种场景后会导致通用的场景变得更复杂,还是使整理抽象或者 APIs 
变得更臃肿?
+ - 该功能是否适合 Flink 的架构?它是否易扩展并保持 Flink 未来的灵活性,或者该功能将来会限制 Flink 吗?
+ - 该特性是一个重要的新增内容(而不是对现有内容的改进)吗?如果是,Flink 社区会承诺维护这个特性吗?
+ - 这个特性是否与 Flink 的路线图以及当前正在进行的工作内容一致?
+ - 该特性是否为 Flink 用户或开发人员带来了附加价值?或者它引入了回归的风险而没有给相关的用户或开发人员带来好处?
+ - 该贡献是否存在于其他仓库中,例如 Apache Bahir 或者其他第三方库?
+ - 这仅仅是为了在开源项目中获得提交而做出的贡献吗(仅仅是为了获得贡献而贡献,才去修复拼写错误、改变代码风格)?
+  - 在如何解决这个问题上已有**共识**,这包括以下方面的考虑:
+- API、数据向后兼容性和迁移策略
+- 测试策略
+- 对 Flink 构建时间的影响
+- 依赖关系及其许可证
 
-If a change is identified as a large or controversial change in the discussion 
on Jira, it might require a [Flink Improvement Proposal 
(FLIP)](https://cwiki.apache.org/confluence/display/FLINK/Flink+Improvement+Proposals)
 or a discussion on the [dev mailing list]( {{ site.base 
}}/community.html#mailing-lists) to reach agreement and consensus.
+如果在 Jira 的讨论中确定该更改是一项大的或有争议的更改,则可能需要一个 [Flink Improvement Proposal (FLIP)](https://cwiki.apache.org/confluence/display/FLINK/Flink+Improvement+Proposals),或者在 [dev 邮件列表]( {{ site.base }}/community.html#mailing-lists)中进行讨论,以达成一致和共识。

[GitHub] [flink-web] yangjf2019 commented on a change in pull request #268: [FLINK-13343][docs-zh] Translate "Contribute Code" page into Chinese

2020-05-16 Thread GitBox


yangjf2019 commented on a change in pull request #268:
URL: https://github.com/apache/flink-web/pull/268#discussion_r426215133



##
File path: contributing/contribute-code.zh.md
##

[GitHub] [flink-web] yangjf2019 commented on a change in pull request #268: [FLINK-13343][docs-zh] Translate "Contribute Code" page into Chinese

2020-05-16 Thread GitBox


yangjf2019 commented on a change in pull request #268:
URL: https://github.com/apache/flink-web/pull/268#discussion_r426215120



##
File path: contributing/contribute-code.zh.md
##

[GitHub] [flink-web] yangjf2019 commented on a change in pull request #268: [FLINK-13343][docs-zh] Translate "Contribute Code" page into Chinese

2020-05-16 Thread GitBox


yangjf2019 commented on a change in pull request #268:
URL: https://github.com/apache/flink-web/pull/268#discussion_r426215078



##
File path: contributing/contribute-code.zh.md
##

[GitHub] [flink-web] yangjf2019 commented on a change in pull request #268: [FLINK-13343][docs-zh] Translate "Contribute Code" page into Chinese

2020-05-16 Thread GitBox


yangjf2019 commented on a change in pull request #268:
URL: https://github.com/apache/flink-web/pull/268#discussion_r426215097



##
File path: contributing/contribute-code.zh.md
##

[GitHub] [flink-web] yangjf2019 commented on a change in pull request #268: [FLINK-13343][docs-zh] Translate "Contribute Code" page into Chinese

2020-05-16 Thread GitBox


yangjf2019 commented on a change in pull request #268:
URL: https://github.com/apache/flink-web/pull/268#discussion_r426215066



##
File path: contributing/contribute-code.zh.md
##

[GitHub] [flink-web] yangjf2019 commented on a change in pull request #268: [FLINK-13343][docs-zh] Translate "Contribute Code" page into Chinese

2020-05-16 Thread GitBox


yangjf2019 commented on a change in pull request #268:
URL: https://github.com/apache/flink-web/pull/268#discussion_r426215044



##
File path: contributing/contribute-code.zh.md
##
@@ -126,7 +126,7 @@ Apache Flink is maintained, improved, and extended by code 
contributions of volu
   
 
   
-Note: trivial hot fixes such as typos or syntax errors can be 
opened as a [hotfix] pull request, without a Jira ticket.
+注意:诸如拼写错误或语法错误之类的简单热修复可以在创建 pull request 时,使用 [hotfix] 标识,可以不创建 Jira 工单。

Review comment:
   “注意:诸如拼写错误或语法错误之类的简单热修复可以不用创建 Jira 工单,直接提交 [hotfix] pull request 即可。”





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org




[GitHub] [flink-web] yangjf2019 commented on a change in pull request #268: [FLINK-13343][docs-zh] Translate "Contribute Code" page into Chinese

2020-05-16 Thread GitBox


yangjf2019 commented on a change in pull request #268:
URL: https://github.com/apache/flink-web/pull/268#discussion_r426215023



##
File path: contributing/contribute-code.zh.md
##
- Do not open a Jira ticket for these types of changes before the discussion 
has come to a conclusion.
- Jira tickets based on a dev@ discussion need to link to that discussion and 
should summarize the outcome.
+ 在讨论未达成一致之前,不要为这些类型的更改打开 Jira 工单。

Review comment:
   Updated consistently throughout.









[GitHub] [flink] becketqin commented on pull request #12122: [FLINK-15102] Add continuousSource() method to StreamExecutionEnvironment.

2020-05-16 Thread GitBox


becketqin commented on pull request #12122:
URL: https://github.com/apache/flink/pull/12122#issuecomment-629739829


   Thanks @StephanEwen. Apologies that I somehow missed some of your previous 
comments...
   
   I'll merge the patch with the following changes.
   
   1. `SourceReader` actually does not have to extend `Serializable`. So I just 
removed that from the interface. I also fixed other places as suggested where 
there is a warning.
   2. Change the `CoordinatedSourceITCase` to compare the full list.
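   The first change can be illustrated with a minimal, hypothetical Java sketch (the `ExampleSourceReader` name and its methods are illustrative only, not Flink's actual `SourceReader` API): an interface that does not extend `Serializable` imposes no serialization contract on its implementations.

   ```java
   import java.io.Serializable;

   // Hypothetical reader-style interface that deliberately does NOT extend
   // Serializable, mirroring the change described above. Instances are created
   // where they are used, so no serialization contract is needed on the type.
   interface ExampleSourceReader<T> extends AutoCloseable {
       T pollNext() throws Exception;
   }

   class SerializableCheck {
       public static void main(String[] args) {
           // The interface carries no Serializable contract:
           boolean serializable =
               Serializable.class.isAssignableFrom(ExampleSourceReader.class);
           System.out.println(serializable); // prints "false"
       }
   }
   ```

   Individual implementations remain free to implement `Serializable` themselves; removing the bound from the interface simply stops forcing it on all of them.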







[GitHub] [flink-web] yangjf2019 commented on a change in pull request #268: [FLINK-13343][docs-zh] Translate "Contribute Code" page into Chinese

2020-05-16 Thread GitBox


yangjf2019 commented on a change in pull request #268:
URL: https://github.com/apache/flink-web/pull/268#discussion_r426214595



##
File path: contributing/contribute-code.zh.md
##
@@ -136,91 +136,91 @@ Apache Flink is maintained, improved, and extended by 
code contributions of volu
 
 
 
-### 1. Create Jira Ticket and Reach Consensus
+### 1. 创建 Jira 工单并达成共识。
 
 
-The first step for making a contribution to Apache Flink is to reach consensus 
with the Flink community. This means agreeing on the scope and implementation 
approach of a change.
+向 Apache Flink 做出贡献的第一步是与 Flink 社区达成共识,这意味着需要一起商定更改的范围和实现的方法。
 
-In most cases, the discussion should happen in [Flink's bug tracker: 
Jira](https://issues.apache.org/jira/projects/FLINK/summary).
+在大多数情况下,我们应该在 [Flink 的 Bug 
追踪器:Jira](https://issues.apache.org/jira/projects/FLINK/summary) 中进行讨论。
 
-The following types of changes require a `[DISCUSS]` thread on the 
dev@flink.a.o Flink mailing list:
+以下类型的更改需要向 Flink 的 d...@flink.apache.org 邮件列表发一封以 `[DISCUSS]` 开头的邮件:
 
- - big changes (major new feature; big refactorings, involving multiple 
components)
- - potentially controversial changes or issues
- - changes with very unclear approaches or multiple equal approaches
+ - 重大变化(主要新功能、大重构和涉及多个组件)
+ - 可能存在争议的改动或问题
+ - 采用非常不明确的方法或有多种实现方法
 
- Do not open a Jira ticket for these types of changes before the discussion 
has come to a conclusion.
- Jira tickets based on a dev@ discussion need to link to that discussion and 
should summarize the outcome.
+ 在讨论未达成一致之前,不要为这些类型的更改打开 Jira 工单。
+ 基于 dev 邮件讨论的 Jira 工单需要链接到该讨论,并总结结果。
 
 
 
-**Requirements for a Jira ticket to get consensus:**
+**Jira 工单获得共识的要求:**
 
-  - Formal requirements
- - The *Title* describes the problem concisely.
- - The *Description* gives all the details needed to understand the 
problem or feature request.
- - The *Component* field is set: Many committers and contributors only 
focus on certain subsystems of Flink. Setting the appropriate component is 
important for getting their attention.
-  - There is **agreement** that the ticket solves a valid problem, and that it 
is a **good fit** for Flink.
-The Flink community considers the following aspects:
- - Does the contribution alter the behavior of features or components in a 
way that it may break previous users’ programs and setups? If yes, there needs 
to be a discussion and agreement that this change is desirable.
- - Does the contribution conceptually fit well into Flink? Is it too much 
of a special case such that it makes things more complicated for the common 
case, or bloats the abstractions / APIs?
- - Does the feature fit well into Flink’s architecture? Will it scale and 
keep Flink flexible for the future, or will the feature restrict Flink in the 
future?
- - Is the feature a significant new addition (rather than an improvement 
to an existing part)? If yes, will the Flink community commit to maintaining 
this feature?
- - Does this feature align well with Flink's roadmap and currently ongoing 
efforts?
- - Does the feature produce added value for Flink users or developers? Or 
does it introduce the risk of regression without adding relevant user or 
developer benefit?
- - Could the contribution live in another repository, e.g., Apache Bahir 
or another external repository?
- - Is this a contribution just for the sake of getting a commit in an open 
source project (fixing typos, style changes merely for taste reasons)
-  - There is **consensus** on how to solve the problem. This includes 
considerations such as
-- API and data backwards compatibility and migration strategies
-- Testing strategies
-- Impact on Flink's build time
-- Dependencies and their licenses
+  - 正式要求
+ - 描述问题的 *Title* 要简明扼要。
+ - 在 *Description* 中要提供了解问题或功能请求所需的所有详细信息。
+ - 要设置 *Component* 字段:许多 committers 和贡献者,只专注于 Flink 
的某些子系统。设置适当的组件标签对于引起他们的注意很重要。
+  - 社区*一致同意*使用工单是有效解决问题的方法,而且这**非常适合** Flink。 
+Flink 社区考虑了以下几个方面:
+ - 这种贡献是否会改变特性或组件的性能,从而破坏以前的用户程序和设置?如果是,那么就需要讨论并达成一致意见,证明这种改变是可取的。
+ - 这个贡献在概念上是否适合 Flink?这是否是一种特殊场景?支持这种场景后会导致通用的场景变得更复杂,还是使整体抽象或者 APIs 变得更臃肿?
+ - 该功能是否适合 Flink 的架构?它是否易扩展并保持 Flink 未来的灵活性,或者该功能将来会限制 Flink 吗?
+ - 该特性是一个重要的新增内容(而不是对现有内容的改进)吗?如果是,Flink 社区会承诺维护这个特性吗?
+ - 这个特性是否与 Flink 的路线图以及当前正在进行的工作内容一致?
+ - 该特性是否为 Flink 用户或开发人员带来了附加价值?或者它引入了回归的风险而没有给相关的用户或开发人员带来好处?
+ - 该贡献是否存在于其他仓库中,例如 Apache Bahir 或者其他第三方库?
+ - 这仅仅是为了在开源项目中获得提交而做出的贡献吗(仅仅是为了获得贡献而贡献,才去修复拼写错误、改变代码风格)?
+  - 在如何解决这个问题上已有**共识**,包括以下需要考虑的因素
+- API、数据向后兼容性和迁移策略
+- 测试策略
+- 对 Flink 构建时间的影响
+- 依赖关系及其许可证
 
-If a change is identified as a large or controversial change in the discussion 
on Jira, it might require a [Flink Improvement Proposal 
(FLIP)](https://cwiki.apache.org/confluence/display/FLINK/Flink+Improvement+Proposals)
 or a discussion on the [dev mailing list]( {{ site.base 
}}/community.html#mailing-lists) to reach agreement and consensus.
+如果在 

[GitHub] [flink] flinkbot commented on pull request #12194: [FLINK-17764][Examples]Update tips about the default planner when the planner parameter value is not recognized

2020-05-16 Thread GitBox


flinkbot commented on pull request #12194:
URL: https://github.com/apache/flink/pull/12194#issuecomment-629739324


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b646f1df4c1dc9797662fac8fe7f54c6a3de4682 (Sun May 17 
03:59:18 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12192: [FLINK-17758][runtime] Remove unused AdaptedRestartPipelinedRegionStrategyNG

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12192:
URL: https://github.com/apache/flink/pull/12192#issuecomment-629673624


   
   ## CI report:
   
   * 15ca64e3a20101239b30b90064f3b0f35238c828 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1568)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12176: [FLINK-17029][jdbc]Introduce a new JDBC connector with new property keys

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12176:
URL: https://github.com/apache/flink/pull/12176#issuecomment-629283127


   
   ## CI report:
   
   * 057d8fa644c9a203a753fe184de8e204ea81918e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1566)
 
   * e557b09c5dc9371c42542211651b79ea749cbf03 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12147: [FLINK-17653] FLIP-126: Unify (and separate) Watermark Assigners

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12147:
URL: https://github.com/apache/flink/pull/12147#issuecomment-628530122


   
   ## CI report:
   
   * 74cd43706dcd385e51e90fb10cb60c4004e5debc UNKNOWN
   * 45254c90a497b9526cb1c5e891d0a11abfaa03ea Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1570)
 
   * e4bf8eef2b63bdb446eac0bb18cf1efbe45aa68f UNKNOWN
   * 48df53dfbe10fd6181fbcc3cd36536a11a8e463f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1572)
 
   * 7015f3b1ccbf01ff0346121ea51670e14cdd667c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1575)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12145: [FLINK-17428] [table-planner-blink] supports projection push down on new table source interface in blink planner

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12145:
URL: https://github.com/apache/flink/pull/12145#issuecomment-628490262


   
   ## CI report:
   
   * 3747e17ecf18448088dcb467d8bea240b00f12b6 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1550)
 
   * 597608bb0ab2af04ee3143a6e82e5b3a5001cf48 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12029: [FLINK-17451][sql-parser][table-planner-blink][hive] Implement view D…

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12029:
URL: https://github.com/apache/flink/pull/12029#issuecomment-625673739


   
   ## CI report:
   
   * 9405ea4470dc022ffb514f603396fc6bb2582835 UNKNOWN
   * c1ad4b5d93e10b76fad14269f46fa62e4d771bed Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1554)
 
   * cbf5ef6f14500b63eafb43901b3efa6db28c22b2 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #11725:
URL: https://github.com/apache/flink/pull/11725#issuecomment-613252882


   
   ## CI report:
   
   * 066795205734add3b142a92c687c98b25253985e UNKNOWN
   * 9af69eb96e9a0ddaff4937e9d926feff92439f32 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1552)
 
   * aa6298086f01efe5d6ddd1356d7e289804f57a9b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink-web] yangjf2019 commented on a change in pull request #268: [FLINK-13343][docs-zh] Translate "Contribute Code" page into Chinese

2020-05-16 Thread GitBox


yangjf2019 commented on a change in pull request #268:
URL: https://github.com/apache/flink-web/pull/268#discussion_r426214472



##
File path: contributing/contribute-code.zh.md
##
@@ -136,91 +136,91 @@ Apache Flink is maintained, improved, and extended by 
code contributions of volu
 
 
 
-### 1. Create Jira Ticket and Reach Consensus
+### 1. 创建 Jira 工单并达成共识。
 
 
-The first step for making a contribution to Apache Flink is to reach consensus 
with the Flink community. This means agreeing on the scope and implementation 
approach of a change.
+向 Apache Flink 做出贡献的第一步是与 Flink 社区达成共识,这意味着需要一起商定更改的范围和实现的方法。
 
-In most cases, the discussion should happen in [Flink's bug tracker: 
Jira](https://issues.apache.org/jira/projects/FLINK/summary).
+在大多数情况下,我们应该在 [Flink 的 Bug 
追踪器:Jira](https://issues.apache.org/jira/projects/FLINK/summary) 中进行讨论。
 
-The following types of changes require a `[DISCUSS]` thread on the 
dev@flink.a.o Flink mailing list:
+以下类型的更改需要向 Flink 的 dev@flink.apache.org 邮件列表发一封以 `[DISCUSS]` 开头的邮件:
 
- - big changes (major new feature; big refactorings, involving multiple 
components)
- - potentially controversial changes or issues
- - changes with very unclear approaches or multiple equal approaches
+ - 重大变化(主要新功能、大重构和涉及多个组件)
+ - 可能存在争议的改动或问题
+ - 采用非常不明确的方法或有多种实现方法
 
- Do not open a Jira ticket for these types of changes before the discussion 
has come to a conclusion.
- Jira tickets based on a dev@ discussion need to link to that discussion and 
should summarize the outcome.
+ 在讨论未达成一致之前,不要为这些类型的更改打开 Jira 工单。
+ 基于 dev 邮件讨论的 Jira 工单需要链接到该讨论,并总结结果。
 
 
 
-**Requirements for a Jira ticket to get consensus:**
+**Jira 工单获得共识的要求:**
 
-  - Formal requirements
- - The *Title* describes the problem concisely.
- - The *Description* gives all the details needed to understand the 
problem or feature request.
- - The *Component* field is set: Many committers and contributors only 
focus on certain subsystems of Flink. Setting the appropriate component is 
important for getting their attention.
-  - There is **agreement** that the ticket solves a valid problem, and that it 
is a **good fit** for Flink.
-The Flink community considers the following aspects:
- - Does the contribution alter the behavior of features or components in a 
way that it may break previous users’ programs and setups? If yes, there needs 
to be a discussion and agreement that this change is desirable.
- - Does the contribution conceptually fit well into Flink? Is it too much 
of a special case such that it makes things more complicated for the common 
case, or bloats the abstractions / APIs?
- - Does the feature fit well into Flink’s architecture? Will it scale and 
keep Flink flexible for the future, or will the feature restrict Flink in the 
future?
- - Is the feature a significant new addition (rather than an improvement 
to an existing part)? If yes, will the Flink community commit to maintaining 
this feature?
- - Does this feature align well with Flink's roadmap and currently ongoing 
efforts?
- - Does the feature produce added value for Flink users or developers? Or 
does it introduce the risk of regression without adding relevant user or 
developer benefit?
- - Could the contribution live in another repository, e.g., Apache Bahir 
or another external repository?
- - Is this a contribution just for the sake of getting a commit in an open 
source project (fixing typos, style changes merely for taste reasons)
-  - There is **consensus** on how to solve the problem. This includes 
considerations such as
-- API and data backwards compatibility and migration strategies
-- Testing strategies
-- Impact on Flink's build time
-- Dependencies and their licenses
+  - 正式要求
+ - 描述问题的 *Title* 要简明扼要。
+ - 在 *Description* 中要提供了解问题或功能请求所需的所有详细信息。
+ - 要设置 *Component* 字段:许多 committers 和贡献者,只专注于 Flink 
的某些子系统。设置适当的组件标签对于引起他们的注意很重要。
+  - 社区*一致同意*使用工单是有效解决问题的方法,而且这**非常适合** Flink。 
+Flink 社区考虑了以下几个方面:
+ - 这种贡献是否会改变特性或组件的性能,从而破坏以前的用户程序和设置?如果是,那么就需要讨论并达成一致意见,证明这种改变是可取的。
+ - 这个贡献在概念上是否适合 Flink?这是否是一种特殊场景?支持这种场景后会导致通用的场景变得更复杂,还是使整体抽象或者 APIs 变得更臃肿?
+ - 该功能是否适合 Flink 的架构?它是否易扩展并保持 Flink 未来的灵活性,或者该功能将来会限制 Flink 吗?
+ - 该特性是一个重要的新增内容(而不是对现有内容的改进)吗?如果是,Flink 社区会承诺维护这个特性吗?
+ - 这个特性是否与 Flink 的路线图以及当前正在进行的工作内容一致?
+ - 该特性是否为 Flink 用户或开发人员带来了附加价值?或者它引入了回归的风险而没有给相关的用户或开发人员带来好处?
+ - 该贡献是否存在于其他仓库中,例如 Apache Bahir 或者其他第三方库?
+ - 这仅仅是为了在开源项目中获得提交而做出的贡献吗(仅仅是为了获得贡献而贡献,才去修复拼写错误、改变代码风格)?
+  - 在如何解决这个问题上已有**共识**,包括以下需要考虑的因素
+- API、数据向后兼容性和迁移策略
+- 测试策略
+- 对 Flink 构建时间的影响
+- 依赖关系及其许可证
 
-If a change is identified as a large or controversial change in the discussion 
on Jira, it might require a [Flink Improvement Proposal 
(FLIP)](https://cwiki.apache.org/confluence/display/FLINK/Flink+Improvement+Proposals)
 or a discussion on the [dev mailing list]( {{ site.base 
}}/community.html#mailing-lists) to reach agreement and consensus.
+如果在 

[jira] [Updated] (FLINK-17764) Update tips about the default planner when the planner parameter value is not recognized

2020-05-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17764:
---
Labels: pull-request-available  (was: )

> Update tips about the default planner when the planner parameter value is not 
> recognized
> 
>
> Key: FLINK-17764
> URL: https://issues.apache.org/jira/browse/FLINK-17764
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Reporter: xushiwei
>Assignee: xushiwei
>Priority: Minor
>  Labels: pull-request-available
>
> The default planner has been set to blink in the code.
> However, when the planner parameter value is not recognized, the error
> message still reports the default planner as flink.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] Shih-Wei-Hsu opened a new pull request #12194: [FLINK-17764][Examples]Update tips about the default planner when the planner parameter value is not recognized

2020-05-16 Thread GitBox


Shih-Wei-Hsu opened a new pull request #12194:
URL: https://github.com/apache/flink/pull/12194


   
   
   ## What is the purpose of the change
   
   Keep the default planner in the prompt consistent with the code.
   
   ## Brief change log
   
   Change the tip about the default planner to blink when the planner parameter 
value is not recognized, because the default planner has already been set to 
blink in the code.
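
   The mismatch boils down to a fallback branch whose warning text drifted out 
of sync with the actual default. A minimal sketch of the intended behavior 
(hypothetical function name; not the actual Flink examples code):

```python
def resolve_planner(planner_arg: str) -> str:
    """Resolve the planner choice from a command-line argument.

    Hypothetical sketch: the default is "blink", and an unrecognized
    value falls back to it. The warning must name the actual default,
    which is exactly what FLINK-17764 fixes.
    """
    known = {"blink", "flink"}
    if planner_arg in known:
        return planner_arg
    # Before the fix, this message wrongly claimed "flink" was the default.
    print(f"Unrecognized planner '{planner_arg}'; using default planner 'blink'.")
    return "blink"
```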
   
   ## Verifying this change
   
   trivial change
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   







[GitHub] [flink-web] klion26 commented on a change in pull request #268: [FLINK-13343][docs-zh] Translate "Contribute Code" page into Chinese

2020-05-16 Thread GitBox


klion26 commented on a change in pull request #268:
URL: https://github.com/apache/flink-web/pull/268#discussion_r426209811



##
File path: contributing/contribute-code.zh.md
##
@@ -126,7 +126,7 @@ Apache Flink is maintained, improved, and extended by code 
contributions of volu
   
 
   
-Note: trivial hot fixes such as typos or syntax errors can be 
opened as a [hotfix] pull request, without a Jira ticket.
+注意:诸如拼写错误或语法错误之类的简单热修复可以在创建 pull request 时,使用 [hotfix] 标识,可以不创建 
Jira 工单。

Review comment:
   The meaning comes through here. Do you think it would read better as 
“注意:诸如拼写错误或语法错误之类的简单热修复可以不创建 Jira 工单,直接提交 [hotfix] pull request”?
   Mainly, the trailing “可以不创建 Jira 工单” feels like it should come first (skip the 
ticket, then open the pull request directly), and the two occurrences of “可以” 
read a bit repetitive.

##
File path: contributing/contribute-code.zh.md
##
@@ -136,91 +136,91 @@ Apache Flink is maintained, improved, and extended by 
code contributions of volu
 
 
 
-### 1. Create Jira Ticket and Reach Consensus
+### 1. 创建 Jira 工单并达成共识。
 
 
-The first step for making a contribution to Apache Flink is to reach consensus 
with the Flink community. This means agreeing on the scope and implementation 
approach of a change.
+向 Apache Flink 做出贡献的第一步是与 Flink 社区达成共识,这意味着需要一起商定更改的范围和实现的方法。
 
-In most cases, the discussion should happen in [Flink's bug tracker: 
Jira](https://issues.apache.org/jira/projects/FLINK/summary).
+在大多数情况下,我们应该在 [Flink 的 Bug 
追踪器:Jira](https://issues.apache.org/jira/projects/FLINK/summary) 中进行讨论。
 
-The following types of changes require a `[DISCUSS]` thread on the 
dev@flink.a.o Flink mailing list:
+以下类型的更改需要向 Flink 的 dev@flink.apache.org 邮件列表发一封以 `[DISCUSS]` 开头的邮件:
 
- - big changes (major new feature; big refactorings, involving multiple 
components)
- - potentially controversial changes or issues
- - changes with very unclear approaches or multiple equal approaches
+ - 重大变化(主要新功能、大重构和涉及多个组件)
+ - 可能存在争议的改动或问题
+ - 采用非常不明确的方法或有多种实现方法
 
- Do not open a Jira ticket for these types of changes before the discussion 
has come to a conclusion.
- Jira tickets based on a dev@ discussion need to link to that discussion and 
should summarize the outcome.
+ 在讨论未达成一致之前,不要为这些类型的更改打开 Jira 工单。

Review comment:
   “打开” -> “创建”?(i.e., “open” -> “create”)

##
File path: contributing/contribute-code.zh.md
##
@@ -136,91 +136,91 @@ Apache Flink is maintained, improved, and extended by 
code contributions of volu
 
 
 
-### 1. Create Jira Ticket and Reach Consensus
+### 1. 创建 Jira 工单并达成共识。
 
 
-The first step for making a contribution to Apache Flink is to reach consensus 
with the Flink community. This means agreeing on the scope and implementation 
approach of a change.
+向 Apache Flink 做出贡献的第一步是与 Flink 社区达成共识,这意味着需要一起商定更改的范围和实现的方法。
 
-In most cases, the discussion should happen in [Flink's bug tracker: 
Jira](https://issues.apache.org/jira/projects/FLINK/summary).
+在大多数情况下,我们应该在 [Flink 的 Bug 
追踪器:Jira](https://issues.apache.org/jira/projects/FLINK/summary) 中进行讨论。
 
-The following types of changes require a `[DISCUSS]` thread on the 
dev@flink.a.o Flink mailing list:
+以下类型的更改需要向 Flink 的 dev@flink.apache.org 邮件列表发一封以 `[DISCUSS]` 开头的邮件:
 
- - big changes (major new feature; big refactorings, involving multiple 
components)
- - potentially controversial changes or issues
- - changes with very unclear approaches or multiple equal approaches
+ - 重大变化(主要新功能、大重构和涉及多个组件)
+ - 可能存在争议的改动或问题
+ - 采用非常不明确的方法或有多种实现方法
 
- Do not open a Jira ticket for these types of changes before the discussion 
has come to a conclusion.
- Jira tickets based on a dev@ discussion need to link to that discussion and 
should summarize the outcome.
+ 在讨论未达成一致之前,不要为这些类型的更改打开 Jira 工单。
+ 基于 dev 邮件讨论的 Jira 工单需要链接到该讨论,并总结结果。
 
 
 
-**Requirements for a Jira ticket to get consensus:**
+**Jira 工单获得共识的要求:**
 
-  - Formal requirements
- - The *Title* describes the problem concisely.
- - The *Description* gives all the details needed to understand the 
problem or feature request.
- - The *Component* field is set: Many committers and contributors only 
focus on certain subsystems of Flink. Setting the appropriate component is 
important for getting their attention.
-  - There is **agreement** that the ticket solves a valid problem, and that it 
is a **good fit** for Flink.
-The Flink community considers the following aspects:
- - Does the contribution alter the behavior of features or components in a 
way that it may break previous users’ programs and setups? If yes, there needs 
to be a discussion and agreement that this change is desirable.
- - Does the contribution conceptually fit well into Flink? Is it too much 
of a special case such that it makes things more complicated for the common 
case, or bloats the abstractions / APIs?
- - Does the feature fit well into Flink’s architecture? Will it scale and 
keep Flink flexible for the future, or will the feature restrict Flink in the 
future?
- - Is the feature a significant new addition (rather than an improvement 
to an existing part)? If yes, will 

[jira] [Commented] (FLINK-17762) Postgres Catalog should pass table's primary key to catalogTable

2020-05-16 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17109299#comment-17109299
 ] 

Jark Wu commented on FLINK-17762:
-

Btw, this one is blocked by FLINK-17029

> Postgres Catalog should pass table's primary key to catalogTable
> 
>
> Key: FLINK-17762
> URL: https://issues.apache.org/jira/browse/FLINK-17762
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Leonard Xu
>Priority: Major
>
> For an upsert query, if the table comes from a catalog rather than being 
> created in Flink, the Postgres catalog should pass the table's primary key to 
> the catalogTable so that JdbcDynamicTableSink can determine whether to work in 
> upsert mode or append-only mode.





[GitHub] [flink] flinkbot edited a comment on pull request #11854: [FLINK-17407] Introduce external resource framework

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #11854:
URL: https://github.com/apache/flink/pull/11854#issuecomment-617586491


   
   ## CI report:
   
   * bddb0e274da11bbe99d15c6e0bb55e8d8c0e658a UNKNOWN
   * dc7a9c5c7d1fac82518815b9277809dfb82ddaac UNKNOWN
   * 2238559b0e2245e77204e7c7d0ef34c7a97e3766 UNKNOWN
   * 8be6c46114192d31061079e547fc125c08b916b1 UNKNOWN
   * d255903bbe5da86d53547ad6a3d5ddf02e63b913 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1564)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-17762) Postgres Catalog should pass table's primary key to catalogTable

2020-05-16 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17109297#comment-17109297
 ] 

Jark Wu commented on FLINK-17762:
-

Yes. I think we can reuse the implementations, but we may only need to support 
PRIMARY KEY in it. I would like to treat it as a bug fix; otherwise, the tables 
registered by the Postgres catalog can't be used as a sink in a group-by query. 
Would you like to take this [~f.pompermaier]?

> Postgres Catalog should pass table's primary key to catalogTable
> 
>
> Key: FLINK-17762
> URL: https://issues.apache.org/jira/browse/FLINK-17762
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Leonard Xu
>Priority: Major
>
> For an upsert query, if the table comes from a catalog rather than being 
> created in Flink, the Postgres catalog should pass the table's primary key to 
> the catalogTable so that JdbcDynamicTableSink can determine whether to work in 
> upsert mode or append-only mode.
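
As a rough illustration of why the primary key matters here (hypothetical, 
simplified types; Flink's actual CatalogTable and JDBC sink APIs are richer):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CatalogTable:
    """Simplified stand-in for a catalog's table description."""
    name: str
    columns: List[str]
    primary_key: Optional[List[str]] = None  # what the Postgres catalog should fill in

def jdbc_sink_mode(table: CatalogTable) -> str:
    # A sink can only upsert when it knows which key to upsert on;
    # without one it must fall back to append-only writes.
    return "upsert" if table.primary_key else "append-only"

orders = CatalogTable("orders", ["id", "amount"], primary_key=["id"])
events = CatalogTable("events", ["ts", "payload"])
```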





[jira] [Closed] (FLINK-17630) Implement format factory for Avro serialization and deserialization schema

2020-05-16 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-17630.
---
Resolution: Duplicate

> Implement format factory for Avro serialization and deserialization schema
> --
>
> Key: FLINK-17630
> URL: https://issues.apache.org/jira/browse/FLINK-17630
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jark Wu
>Assignee: Danny Chen
>Priority: Major
>






[jira] [Assigned] (FLINK-17757) Implement format factory for Avro serialization and deseriazation schema of RowData type

2020-05-16 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-17757:
---

Assignee: Danny Chen

> Implement format factory for Avro serialization and deseriazation schema of 
> RowData type
> 
>
> Key: FLINK-17757
> URL: https://issues.apache.org/jira/browse/FLINK-17757
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Danny Chen
>Assignee: Danny Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>






[jira] [Commented] (FLINK-17353) Broken links in Flink docs master

2020-05-16 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17109295#comment-17109295
 ] 

Jark Wu commented on FLINK-17353:
-

Assigned to you [~yangyichao]

> Broken links in Flink docs master
> -
>
> Key: FLINK-17353
> URL: https://issues.apache.org/jira/browse/FLINK-17353
> Project: Flink
>  Issue Type: Bug
>  Components: chinese-translation, Documentation
>Reporter: Seth Wiesman
>Assignee: Yichao Yang
>Priority: Major
>
> http://localhost:4000/concepts/programming-model.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/concepts/runtime.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/internals/stream_checkpointing.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/concepts/flink-architecture.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/ops/memory/config.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/concepts/programming-model.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/concepts/runtime.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/dev/dev/table/python/installation.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/internals/stream_checkpointing.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/dev/table/sql.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/ops/memory/config.html:
> Remote file does not exist -- broken link!!!





[jira] [Assigned] (FLINK-17353) Broken links in Flink docs master

2020-05-16 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-17353:
---

Assignee: Yichao Yang

> Broken links in Flink docs master
> -
>
> Key: FLINK-17353
> URL: https://issues.apache.org/jira/browse/FLINK-17353
> Project: Flink
>  Issue Type: Bug
>  Components: chinese-translation, Documentation
>Reporter: Seth Wiesman
>Assignee: Yichao Yang
>Priority: Major
>
> http://localhost:4000/concepts/programming-model.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/concepts/runtime.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/internals/stream_checkpointing.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/concepts/flink-architecture.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/ops/memory/config.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/concepts/programming-model.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/concepts/runtime.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/dev/dev/table/python/installation.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/internals/stream_checkpointing.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/dev/table/sql.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/ops/memory/config.html:
> Remote file does not exist -- broken link!!!





[GitHub] [flink] flinkbot edited a comment on pull request #12188: [FLINK-17728] [sql-client] sql client supports parser statements via sql parser

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12188:
URL: https://github.com/apache/flink/pull/12188#issuecomment-629595773


   
   ## CI report:
   
   * 7111bcfcc655d8d6b0ee5f9ff66e80e732f7272e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1504)
 
   * 615cd355c672d2275c8daa85a9cf4d68ae465211 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1582)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] KurtYoung commented on a change in pull request #12145: [FLINK-17428] [table-planner-blink] supports projection push down on new table source interface in blink planner

2020-05-16 Thread GitBox


KurtYoung commented on a change in pull request #12145:
URL: https://github.com/apache/flink/pull/12145#discussion_r426211541



##
File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushProjectIntoTableSourceScanRule.java
##
@@ -0,0 +1,120 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.abilities.SupportsProjectionPushDown;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.plan.schema.TableSourceTable;
+import org.apache.flink.table.planner.plan.utils.RexNodeExtractor;
+import org.apache.flink.table.planner.plan.utils.RexNodeRewriter;
+
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.logical.LogicalProject;
+import org.apache.calcite.rel.logical.LogicalTableScan;
+import org.apache.calcite.rel.rules.ProjectRemoveRule;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexNode;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Planner rule that pushes a {@link LogicalProject} into a {@link LogicalTableScan}
+ * which wraps a {@link SupportsProjectionPushDown} dynamic table source.
+ *
+ * NOTE: This rule does not support nested-field push down yet;
+ * instead it pushes down the top-level columns just like non-nested fields.
+ */
+public class PushProjectIntoTableSourceScanRule extends RelOptRule {
+   public static final PushProjectIntoTableSourceScanRule INSTANCE = new 
PushProjectIntoTableSourceScanRule();
+
+   public PushProjectIntoTableSourceScanRule() {
+   super(operand(LogicalProject.class,
+   operand(LogicalTableScan.class, none())),
+   "PushProjectIntoTableSourceScanRule");
+   }
+
+   @Override
+   public boolean matches(RelOptRuleCall call) {
+   LogicalTableScan scan = call.rel(1);
+   TableSourceTable tableSourceTable = scan.getTable().unwrap(TableSourceTable.class);
+   if (tableSourceTable == null ||
+   !(tableSourceTable.tableSource() instanceof SupportsProjectionPushDown)) {
+   return false;
+   }
+   SupportsProjectionPushDown pushDownSource = (SupportsProjectionPushDown) tableSourceTable.tableSource();
+   if (pushDownSource.supportsNestedProjection()) {
+   throw new TableException("Nested projection push down is unsupported now.\n" +
+   "Please disable nested projection (SupportsProjectionPushDown#supportsNestedProjection returns false), " +
+   "planner will push down the top-level columns.");
+   } else {
+   return true;
+   }
+   }
+
+   @Override
+   public void onMatch(RelOptRuleCall call) {
+   LogicalProject project = call.rel(0);
+   LogicalTableScan scan = call.rel(1);
+
+   int[] usedFields = RexNodeExtractor.extractRefInputFields(project.getProjects());
+   // if no fields can be projected, we keep the original plan.
+   if (scan.getRowType().getFieldCount() == usedFields.length) {
+   return;
+   }
+
+   TableSourceTable oldTableSourceTable = scan.getTable().unwrap(TableSourceTable.class);
+   DynamicTableSource newTableSource = oldTableSourceTable.tableSource().copy();
+   SupportsProjectionPushDown newProjectPushDownSource = (SupportsProjectionPushDown) newTableSource;
+
+   int[][] projectedFields = new int[usedFields.length][];
+   List<String> fieldNames = new ArrayList<>();
+   for (int i = 0; i < usedFields.length; ++i) {
+   int usedField = usedFields[i];
+   projectedFields[i] = new int[] {
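The quoted diff is truncated, but the loop above is building the top-level projection mapping handed to the source. A standalone sketch of that index remapping (the `ProjectionMapping` class name is illustrative, not part of the PR; each inner array is a field path of length 1 because nested projection is unsupported):

```java
import java.util.Arrays;

public class ProjectionMapping {
    // Given the distinct input-field indices referenced by a projection,
    // build the projectedFields array passed to SupportsProjectionPushDown:
    // one single-element path per referenced top-level column.
    static int[][] toProjectedFields(int[] usedFields) {
        int[][] projectedFields = new int[usedFields.length][];
        for (int i = 0; i < usedFields.length; i++) {
            projectedFields[i] = new int[] { usedFields[i] };
        }
        return projectedFields;
    }

    public static void main(String[] args) {
        // A projection referencing columns 0, 2 and 5 of the scan's row type.
        int[][] p = toProjectedFields(new int[] {0, 2, 5});
        System.out.println(Arrays.deepToString(p)); // [[0], [2], [5]]
    }
}
```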

[GitHub] [flink] wuchong commented on pull request #12150: [FLINK-17026][kafka] Introduce a new Kafka connector with new proper…

2020-05-16 Thread GitBox


wuchong commented on pull request #12150:
URL: https://github.com/apache/flink/pull/12150#issuecomment-629734991


   Unfortunately, the following test case failed:
   
   [ERROR]   Kafka011TableITCase>KafkaTableTestBase.testKafkaSourceSink:145 » 
Execution org...
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] KurtYoung commented on a change in pull request #12073: [FLINK-17735][streaming] Add specialized collecting iterator

2020-05-16 Thread GitBox


KurtYoung commented on a change in pull request #12073:
URL: https://github.com/apache/flink/pull/12073#discussion_r426208132



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/collect/CollectResultFetcher.java
##
@@ -0,0 +1,343 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.api.operators.collect;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.common.JobExecutionResult;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.accumulators.SerializedListAccumulator;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.common.typeutils.base.array.BytePrimitiveArraySerializer;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.runtime.jobgraph.OperatorID;
+import org.apache.flink.runtime.operators.coordination.CoordinationRequestGateway;
+import org.apache.flink.util.Preconditions;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+/**
+ * A fetcher which fetches query results from the sink and provides exactly-once semantics.
+ */
+public class CollectResultFetcher {
+
+   private static final int DEFAULT_RETRY_MILLIS = 100;
+   private static final long DEFAULT_ACCUMULATOR_GET_MILLIS = 1;
+
+   private static final Logger LOG = LoggerFactory.getLogger(CollectResultFetcher.class);
+
+   private final CompletableFuture<OperatorID> operatorIdFuture;
+   private final String accumulatorName;
+   private final int retryMillis;
+
+   private ResultBuffer buffer;
+
+   private JobClient jobClient;
+   private boolean terminated;

Review comment:
   rename to `jobTerminated`
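For context on the fetcher under review: a minimal, self-contained sketch of the poll-until-terminated retry loop that a result fetcher of this kind implements, sleeping `retryMillis` between attempts. All names and signatures here are illustrative assumptions, not Flink's actual API:

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Supplier;

public class RetryingFetcher {
    private final int retryMillis;

    public RetryingFetcher(int retryMillis) {
        this.retryMillis = retryMillis;
    }

    // poll: returns a batch of results if one is available right now;
    // jobTerminated: has the job reached a terminal state?
    public Optional<List<String>> fetch(
            Supplier<Optional<List<String>>> poll,
            Supplier<Boolean> jobTerminated) throws InterruptedException {
        while (true) {
            Optional<List<String>> batch = poll.get();
            if (batch.isPresent()) {
                return batch;                 // results arrived
            }
            if (jobTerminated.get()) {
                return Optional.empty();      // job done, no more results will come
            }
            Thread.sleep(retryMillis);        // back off, then retry
        }
    }

    public static void main(String[] args) throws InterruptedException {
        RetryingFetcher fetcher = new RetryingFetcher(10);
        fetcher.fetch(() -> Optional.of(List.of("row1")), () -> false)
                .ifPresent(System.out::println);
    }
}
```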

##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/collect/CollectResultFetcher.java
##
@@ -0,0 +1,343 @@

[GitHub] [flink] flinkbot edited a comment on pull request #12188: [FLINK-17728] [sql-client] sql client supports parser statements via sql parser

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12188:
URL: https://github.com/apache/flink/pull/12188#issuecomment-629595773


   
   ## CI report:
   
   * 7111bcfcc655d8d6b0ee5f9ff66e80e732f7272e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1504)
 
   * 615cd355c672d2275c8daa85a9cf4d68ae465211 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12184: [FLINK-17027] Introduce a new Elasticsearch 7 connector with new property keys

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12184:
URL: https://github.com/apache/flink/pull/12184#issuecomment-629425634


   
   ## CI report:
   
   * 80bbdd72c2aeb3c802deef71436882603590d147 UNKNOWN
   * 25a6ad18a9f957a5f2081e9e9282aa967343987b Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1573)
 
   * cd36d3abe76854bdabb47fd498e51786fee69239 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1576)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17730) HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart times out

2020-05-16 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17109283#comment-17109283
 ] 

Dian Fu commented on FLINK-17730:
-

another instance: 
https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1551/logs/70

> HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart
>  times out
> 
>
> Key: FLINK-17730
> URL: https://issues.apache.org/jira/browse/FLINK-17730
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, FileSystems, Tests
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1374=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8
> After 5 minutes 
> {code}
> 2020-05-15T06:56:38.1688341Z "main" #1 prio=5 os_prio=0 
> tid=0x7fa10800b800 nid=0x1161 runnable [0x7fa110959000]
> 2020-05-15T06:56:38.1688709Zjava.lang.Thread.State: RUNNABLE
> 2020-05-15T06:56:38.1689028Z  at 
> java.net.SocketInputStream.socketRead0(Native Method)
> 2020-05-15T06:56:38.1689496Z  at 
> java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
> 2020-05-15T06:56:38.1689921Z  at 
> java.net.SocketInputStream.read(SocketInputStream.java:171)
> 2020-05-15T06:56:38.1690316Z  at 
> java.net.SocketInputStream.read(SocketInputStream.java:141)
> 2020-05-15T06:56:38.1690723Z  at 
> sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
> 2020-05-15T06:56:38.1691196Z  at 
> sun.security.ssl.InputRecord.readV3Record(InputRecord.java:593)
> 2020-05-15T06:56:38.1691608Z  at 
> sun.security.ssl.InputRecord.read(InputRecord.java:532)
> 2020-05-15T06:56:38.1692023Z  at 
> sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
> 2020-05-15T06:56:38.1692558Z  - locked <0xb94644f8> (a 
> java.lang.Object)
> 2020-05-15T06:56:38.1692946Z  at 
> sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:933)
> 2020-05-15T06:56:38.1693371Z  at 
> sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
> 2020-05-15T06:56:38.1694151Z  - locked <0xb9464d20> (a 
> sun.security.ssl.AppInputStream)
> 2020-05-15T06:56:38.1694908Z  at 
> org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
> 2020-05-15T06:56:38.1695475Z  at 
> org.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198)
> 2020-05-15T06:56:38.1696007Z  at 
> org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176)
> 2020-05-15T06:56:38.1696509Z  at 
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
> 2020-05-15T06:56:38.1696993Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1697466Z  at 
> com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-05-15T06:56:38.1698069Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1698567Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1699041Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1699624Z  at 
> com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-05-15T06:56:38.1700090Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1700584Z  at 
> com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
> 2020-05-15T06:56:38.1701282Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1701800Z  at 
> com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
> 2020-05-15T06:56:38.1702328Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1702804Z  at 
> org.apache.hadoop.fs.s3a.S3AInputStream.lambda$read$3(S3AInputStream.java:445)
> 2020-05-15T06:56:38.1703270Z  at 
> org.apache.hadoop.fs.s3a.S3AInputStream$$Lambda$42/1204178174.execute(Unknown 
> Source)
> 2020-05-15T06:56:38.1703677Z  at 
> org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
> 2020-05-15T06:56:38.1704090Z  at 
> org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260)
> 2020-05-15T06:56:38.1704607Z  at 
> org.apache.hadoop.fs.s3a.Invoker$$Lambda$23/1991724700.execute(Unknown Source)
> 2020-05-15T06:56:38.1705115Z  at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:317)
> 2020-05-15T06:56:38.1705551Z  at 
> 

[GitHub] [flink] wuchong commented on a change in pull request #12150: [FLINK-17026][kafka] Introduce a new Kafka connector with new proper…

2020-05-16 Thread GitBox


wuchong commented on a change in pull request #12150:
URL: https://github.com/apache/flink/pull/12150#discussion_r426210338



##
File path: 
flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/table/KafkaOptions.java
##
@@ -0,0 +1,374 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.kafka.table;
+
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ConfigOptions;
+import org.apache.flink.configuration.ReadableConfig;
+import org.apache.flink.streaming.connectors.kafka.config.StartupMode;
+import org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition;
+import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkFixedPartitioner;
+import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.util.InstantiationUtil;
+
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Properties;
+import java.util.Set;
+
+/** Option utils for the Kafka table source and sink. */
+public class KafkaOptions {
+   private KafkaOptions() {}
+
+   // --------------------------------------------------------------------------------------------
+   // Kafka specific options
+   // --------------------------------------------------------------------------------------------
+
+   public static final ConfigOption<String> TOPIC = ConfigOptions
+   .key("topic")
+   .stringType()
+   .noDefaultValue()
+   .withDescription("Required topic name from which the table is read");
+
+   public static final ConfigOption<String> PROPS_BOOTSTRAP_SERVERS = ConfigOptions
+   .key("properties.bootstrap.servers")
+   .stringType()
+   .noDefaultValue()
+   .withDescription("Required Kafka server connection string");
+
+   public static final ConfigOption<String> PROPS_GROUP_ID = ConfigOptions
+   .key("properties.group.id")
+   .stringType()
+   .noDefaultValue()
+   .withDescription("Required consumer group in Kafka consumer, no need for Kafka producer");
+
+   public static final ConfigOption<String> PROPS_ZK_CONNECT = ConfigOptions
+   .key("properties.zookeeper.connect")
+   .stringType()
+   .noDefaultValue()
+   .withDescription("Optional ZooKeeper connection string");
+
+   // --------------------------------------------------------------------------------------------
+   // Scan specific options
+   // --------------------------------------------------------------------------------------------
+
+   public static final ConfigOption<String> SCAN_STARTUP_MODE = ConfigOptions
+   .key("scan.startup-mode")
+   .stringType()
+   .defaultValue("group-offsets")
+   .withDescription("Optional startup mode for Kafka consumer, valid enumerations are "
+   + "\"earliest-offset\", \"latest-offset\", \"group-offsets\"\n"
+   + "or \"specific-offsets\"");
+
+   public static final ConfigOption<String> SCAN_STARTUP_SPECIFIC_OFFSETS = ConfigOptions
+   .key("scan.startup.specific-offsets")
+   .stringType()
+   .noDefaultValue()
+   .withDescription("Optional offsets used in case of \"specific-offsets\" startup mode");
+
+   public static final ConfigOption<Long> SCAN_STARTUP_TIMESTAMP_MILLIS = ConfigOptions
+   .key("scan.startup.timestamp-millis")
+   .longType()
+   .noDefaultValue();
+
+   // --------------------------------------------------------------------------------------------
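As an illustration of how a `scan.startup.specific-offsets` value might be consumed, here is a hedged sketch that parses the `partition:0,offset:42;partition:1,offset:300` format into a partition-to-offset map. The `SpecificOffsets` helper and its parsing strategy are assumptions for illustration, not the PR's actual code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SpecificOffsets {
    // Parse "partition:P,offset:O;partition:P,offset:O;..." into partition -> offset.
    static Map<Integer, Long> parse(String value) {
        Map<Integer, Long> offsets = new LinkedHashMap<>();
        for (String pair : value.split(";")) {
            String[] kv = pair.split(",");  // ["partition:0", "offset:42"]
            int partition = Integer.parseInt(kv[0].split(":")[1].trim());
            long offset = Long.parseLong(kv[1].split(":")[1].trim());
            offsets.put(partition, offset);
        }
        return offsets;
    }

    public static void main(String[] args) {
        System.out.println(parse("partition:0,offset:42;partition:1,offset:300"));
    }
}
```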

[jira] [Commented] (FLINK-17768) UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel is instable

2020-05-16 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17109281#comment-17109281
 ] 

Dian Fu commented on FLINK-17768:
-

cc [~AHeise]

> UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel
>  is instable
> -
>
> Key: FLINK-17768
> URL: https://issues.apache.org/jira/browse/FLINK-17768
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel
>  and shouldPerformUnalignedCheckpointOnParallelRemoteChannel failed in azure:
> {code}
> 2020-05-16T12:41:32.3546620Z [ERROR] 
> shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel(org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)
>   Time elapsed: 18.865 s  <<< ERROR!
> 2020-05-16T12:41:32.3548739Z java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-05-16T12:41:32.3550177Z  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 2020-05-16T12:41:32.3551416Z  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2020-05-16T12:41:32.3552959Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1665)
> 2020-05-16T12:41:32.3554979Z  at 
> org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:74)
> 2020-05-16T12:41:32.3556584Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1645)
> 2020-05-16T12:41:32.3558068Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1627)
> 2020-05-16T12:41:32.3559431Z  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.execute(UnalignedCheckpointITCase.java:158)
> 2020-05-16T12:41:32.3560954Z  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel(UnalignedCheckpointITCase.java:145)
> 2020-05-16T12:41:32.3562203Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-05-16T12:41:32.3563433Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-05-16T12:41:32.3564846Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-05-16T12:41:32.3565894Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-05-16T12:41:32.3566870Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-05-16T12:41:32.3568064Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-05-16T12:41:32.3569727Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-05-16T12:41:32.3570818Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-05-16T12:41:32.3571840Z  at 
> org.junit.rules.Verifier$1.evaluate(Verifier.java:35)
> 2020-05-16T12:41:32.3572771Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-05-16T12:41:32.3574008Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> 2020-05-16T12:41:32.3575406Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> 2020-05-16T12:41:32.3576476Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2020-05-16T12:41:32.3577253Z  at java.lang.Thread.run(Thread.java:748)
> 2020-05-16T12:41:32.3578228Z Caused by: 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-05-16T12:41:32.3579520Z  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
> 2020-05-16T12:41:32.3580935Z  at 
> org.apache.flink.client.program.PerJobMiniClusterFactory$PerJobMiniClusterJobClient.lambda$getJobExecutionResult$2(PerJobMiniClusterFactory.java:186)
> 2020-05-16T12:41:32.3582361Z  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2020-05-16T12:41:32.3583456Z  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2020-05-16T12:41:32.3584816Z  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2020-05-16T12:41:32.3585874Z  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2020-05-16T12:41:32.3587059Z  at 
> 

[jira] [Updated] (FLINK-17768) UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel is instable

2020-05-16 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-17768:

Fix Version/s: 1.11.0

> UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel
>  is instable
> -
>
> Key: FLINK-17768
> URL: https://issues.apache.org/jira/browse/FLINK-17768
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
> Fix For: 1.11.0
>
>

[jira] [Updated] (FLINK-17768) UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel is instable

2020-05-16 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-17768:

Affects Version/s: 1.11.0

> UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel
>  is instable
> -
>
> Key: FLINK-17768
> URL: https://issues.apache.org/jira/browse/FLINK-17768
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel
>  and shouldPerformUnalignedCheckpointOnParallelRemoteChannel failed in azure:
> {code}
> 2020-05-16T12:41:32.3546620Z [ERROR] 
> shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel(org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)
>   Time elapsed: 18.865 s  <<< ERROR!
> 2020-05-16T12:41:32.3548739Z java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-05-16T12:41:32.3550177Z  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 2020-05-16T12:41:32.3551416Z  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2020-05-16T12:41:32.3552959Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1665)
> 2020-05-16T12:41:32.3554979Z  at 
> org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:74)
> 2020-05-16T12:41:32.3556584Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1645)
> 2020-05-16T12:41:32.3558068Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1627)
> 2020-05-16T12:41:32.3559431Z  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.execute(UnalignedCheckpointITCase.java:158)
> 2020-05-16T12:41:32.3560954Z  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel(UnalignedCheckpointITCase.java:145)
> 2020-05-16T12:41:32.3562203Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-05-16T12:41:32.3563433Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-05-16T12:41:32.3564846Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-05-16T12:41:32.3565894Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-05-16T12:41:32.3566870Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-05-16T12:41:32.3568064Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-05-16T12:41:32.3569727Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-05-16T12:41:32.3570818Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-05-16T12:41:32.3571840Z  at 
> org.junit.rules.Verifier$1.evaluate(Verifier.java:35)
> 2020-05-16T12:41:32.3572771Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-05-16T12:41:32.3574008Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> 2020-05-16T12:41:32.3575406Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> 2020-05-16T12:41:32.3576476Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2020-05-16T12:41:32.3577253Z  at java.lang.Thread.run(Thread.java:748)
> 2020-05-16T12:41:32.3578228Z Caused by: 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-05-16T12:41:32.3579520Z  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
> 2020-05-16T12:41:32.3580935Z  at 
> org.apache.flink.client.program.PerJobMiniClusterFactory$PerJobMiniClusterJobClient.lambda$getJobExecutionResult$2(PerJobMiniClusterFactory.java:186)
> 2020-05-16T12:41:32.3582361Z  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2020-05-16T12:41:32.3583456Z  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2020-05-16T12:41:32.3584816Z  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2020-05-16T12:41:32.3585874Z  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2020-05-16T12:41:32.3587059Z  at 
> 

[jira] [Updated] (FLINK-17768) UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel is instable

2020-05-16 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-17768:

Labels: test-stability  (was: )

> UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel
>  is instable
> -
>
> Key: FLINK-17768
> URL: https://issues.apache.org/jira/browse/FLINK-17768
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel
>  and shouldPerformUnalignedCheckpointOnParallelRemoteChannel failed in azure:
> {code}
> 2020-05-16T12:41:32.3546620Z [ERROR] 
> shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel(org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)
>   Time elapsed: 18.865 s  <<< ERROR!
> 2020-05-16T12:41:32.3548739Z java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-05-16T12:41:32.3550177Z  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 2020-05-16T12:41:32.3551416Z  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2020-05-16T12:41:32.3552959Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1665)
> 2020-05-16T12:41:32.3554979Z  at 
> org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:74)
> 2020-05-16T12:41:32.3556584Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1645)
> 2020-05-16T12:41:32.3558068Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1627)
> 2020-05-16T12:41:32.3559431Z  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.execute(UnalignedCheckpointITCase.java:158)
> 2020-05-16T12:41:32.3560954Z  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel(UnalignedCheckpointITCase.java:145)
> 2020-05-16T12:41:32.3562203Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-05-16T12:41:32.3563433Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-05-16T12:41:32.3564846Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-05-16T12:41:32.3565894Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-05-16T12:41:32.3566870Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-05-16T12:41:32.3568064Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-05-16T12:41:32.3569727Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-05-16T12:41:32.3570818Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-05-16T12:41:32.3571840Z  at 
> org.junit.rules.Verifier$1.evaluate(Verifier.java:35)
> 2020-05-16T12:41:32.3572771Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-05-16T12:41:32.3574008Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> 2020-05-16T12:41:32.3575406Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> 2020-05-16T12:41:32.3576476Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2020-05-16T12:41:32.3577253Z  at java.lang.Thread.run(Thread.java:748)
> 2020-05-16T12:41:32.3578228Z Caused by: 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-05-16T12:41:32.3579520Z  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
> 2020-05-16T12:41:32.3580935Z  at 
> org.apache.flink.client.program.PerJobMiniClusterFactory$PerJobMiniClusterJobClient.lambda$getJobExecutionResult$2(PerJobMiniClusterFactory.java:186)
> 2020-05-16T12:41:32.3582361Z  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2020-05-16T12:41:32.3583456Z  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2020-05-16T12:41:32.3584816Z  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2020-05-16T12:41:32.3585874Z  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2020-05-16T12:41:32.3587059Z  at 
> 

[jira] [Created] (FLINK-17768) UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel is instable

2020-05-16 Thread Dian Fu (Jira)
Dian Fu created FLINK-17768:
---

 Summary: 
UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel
 is instable
 Key: FLINK-17768
 URL: https://issues.apache.org/jira/browse/FLINK-17768
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Reporter: Dian Fu


UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel
 and shouldPerformUnalignedCheckpointOnParallelRemoteChannel failed in azure:
{code}
2020-05-16T12:41:32.3546620Z [ERROR] 
shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel(org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)
  Time elapsed: 18.865 s  <<< ERROR!
2020-05-16T12:41:32.3548739Z java.util.concurrent.ExecutionException: 
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
2020-05-16T12:41:32.3550177Zat 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
2020-05-16T12:41:32.3551416Zat 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
2020-05-16T12:41:32.3552959Zat 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1665)
2020-05-16T12:41:32.3554979Zat 
org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:74)
2020-05-16T12:41:32.3556584Zat 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1645)
2020-05-16T12:41:32.3558068Zat 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1627)
2020-05-16T12:41:32.3559431Zat 
org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.execute(UnalignedCheckpointITCase.java:158)
2020-05-16T12:41:32.3560954Zat 
org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel(UnalignedCheckpointITCase.java:145)
2020-05-16T12:41:32.3562203Zat 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-05-16T12:41:32.3563433Zat 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-05-16T12:41:32.3564846Zat 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-05-16T12:41:32.3565894Zat 
java.lang.reflect.Method.invoke(Method.java:498)
2020-05-16T12:41:32.3566870Zat 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2020-05-16T12:41:32.3568064Zat 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2020-05-16T12:41:32.3569727Zat 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2020-05-16T12:41:32.3570818Zat 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2020-05-16T12:41:32.3571840Zat 
org.junit.rules.Verifier$1.evaluate(Verifier.java:35)
2020-05-16T12:41:32.3572771Zat 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
2020-05-16T12:41:32.3574008Zat 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
2020-05-16T12:41:32.3575406Zat 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
2020-05-16T12:41:32.3576476Zat 
java.util.concurrent.FutureTask.run(FutureTask.java:266)
2020-05-16T12:41:32.3577253Zat java.lang.Thread.run(Thread.java:748)
2020-05-16T12:41:32.3578228Z Caused by: 
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
2020-05-16T12:41:32.3579520Zat 
org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
2020-05-16T12:41:32.3580935Zat 
org.apache.flink.client.program.PerJobMiniClusterFactory$PerJobMiniClusterJobClient.lambda$getJobExecutionResult$2(PerJobMiniClusterFactory.java:186)
2020-05-16T12:41:32.3582361Zat 
java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
2020-05-16T12:41:32.3583456Zat 
java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
2020-05-16T12:41:32.3584816Zat 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
2020-05-16T12:41:32.3585874Zat 
java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
2020-05-16T12:41:32.3587059Zat 
org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
2020-05-16T12:41:32.3588572Zat 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
2020-05-16T12:41:32.3589733Zat 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
2020-05-16T12:41:32.3590860Zat 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)

[jira] [Created] (FLINK-17767) Tumble and Hop window support offset

2020-05-16 Thread hailong wang (Jira)
hailong wang created FLINK-17767:


 Summary: Tumble and Hop window support offset
 Key: FLINK-17767
 URL: https://issues.apache.org/jira/browse/FLINK-17767
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.10.0
Reporter: hailong wang
 Fix For: 1.11.0


TUMBLE window and HOP window with alignment are not supported yet. We can 
support them by 

(, , )
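The window alignment (offset) semantics being proposed can be illustrated with the window-start calculation that Flink's DataStream API already uses for offset windows (a minimal self-contained sketch; the class and method names here are illustrative, not Flink's actual code):

```java
public class WindowOffsetExample {
    // Computes the start of the tumbling window that a timestamp falls into,
    // given a window size and an alignment offset (all in milliseconds).
    // This mirrors the formula used by TimeWindow.getWindowStartWithOffset.
    static long windowStart(long timestamp, long offset, long windowSize) {
        return timestamp - (timestamp - offset + windowSize) % windowSize;
    }

    public static void main(String[] args) {
        // 1-hour tumbling windows aligned to :15 past the hour.
        long hour = 3_600_000L;
        long offset = 900_000L;
        long ts = 3_700_000L; // falls into the window [00:15:00, 01:15:00)
        System.out.println(windowStart(ts, offset, hour)); // 900000
    }
}
```

With offset 0 this degenerates to ordinary tumbling windows, so the proposal is backward compatible.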



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on a change in pull request #12184: [FLINK-17027] Introduce a new Elasticsearch 7 connector with new property keys

2020-05-16 Thread GitBox


wuchong commented on a change in pull request #12184:
URL: https://github.com/apache/flink/pull/12184#discussion_r426209541



##
File path: 
flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/table/ElasticsearchOptions.java
##
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.elasticsearch.table;
+
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ConfigOptions;
+import org.apache.flink.configuration.MemorySize;
+import org.apache.flink.configuration.description.Description;
+import 
org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase;
+
+import java.time.Duration;
+import java.util.List;
+
+import static org.apache.flink.configuration.description.TextElement.text;
+
+/**
+ * Options for {@link 
org.apache.flink.table.factories.DynamicTableSinkFactory} for Elasticsearch.
+ */
+public class ElasticsearchOptions {
+   /**
+* Backoff strategy. Extends {@link 
ElasticsearchSinkBase.FlushBackoffType} with
+* {@code DISABLED} option.
+*/
+   public enum BackOffType {
+   DISABLED,
+   CONSTANT,
+   EXPONENTIAL
+   }
+
+   public static final ConfigOption<List<String>> HOSTS_OPTION =
+   ConfigOptions.key("hosts")
+   .stringType()
+   .asList()
+   .noDefaultValue()
>   .withDescription("Elasticsearch hosts to connect to.");
+   public static final ConfigOption<String> INDEX_OPTION =
+   ConfigOptions.key("index")
+   .stringType()
+   .noDefaultValue()
+   .withDescription("Elasticsearch index for every 
record.");
+   public static final ConfigOption<String> DOCUMENT_TYPE_OPTION =
+   ConfigOptions.key("document-type")
+   .stringType()
+   .noDefaultValue()
+   .withDescription("Elasticsearch document type.");
+   public static final ConfigOption<String> KEY_DELIMITER_OPTION =
+   ConfigOptions.key("document-id.key-delimiter")
+   .stringType()
+   .defaultValue("_")
+   .withDescription("Delimiter for composite keys e.g., 
\"$\" would result in IDs \"KEY1$KEY2$KEY3\".");
+   public static final ConfigOption<String> FAILURE_HANDLER_OPTION =
+   ConfigOptions.key("failure-handler")
+   .stringType()
+   .defaultValue("fail")
+   .withDescription(Description.builder()
+   .text("Failure handling strategy in case a 
request to Elasticsearch fails")
+   .list(
+   text("\"fail\" (throws an exception if 
a request fails and thus causes a job failure),"),
+   text("\"ignore\" (ignores failures and 
drops the request),"),
+   text("\"retry_rejected\" (re-adds 
requests that have failed due to queue capacity saturation),"),
+   text("\"class name\" for failure 
handling with a ActionRequestFailureHandler subclass"))
+   .build());
+   public static final ConfigOption<Boolean> FLUSH_ON_CHECKPOINT_OPTION =
+   ConfigOptions.key("sink.flush-on-checkpoint")
+   .booleanType()
+   .defaultValue(true)
+   .withDescription("Disables flushing on checkpoint");
+   public static final ConfigOption<Integer> BULK_FLUSH_MAX_ACTIONS_OPTION =
+   ConfigOptions.key("sink.bulk-flush.max-actions")
+   .intType()
+   .noDefaultValue()
+   .withDescription("Maximum number of actions to buffer 
for each bulk request.");
+   public static final ConfigOption<MemorySize> BULK_FLASH_MAX_SIZE_OPTION =
+   ConfigOptions.key("sink.bulk-flush.max-size")
+   .memoryType()
+   
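The `document-id.key-delimiter` option in the diff above joins the values of the key fields into a single Elasticsearch document id. A minimal sketch of the behavior its docstring describes (the helper name is illustrative, not the connector's actual code):

```java
import java.util.List;

public class KeyDelimiterExample {
    // Joins key field values with the configured delimiter, as described
    // for the "document-id.key-delimiter" option (default "_").
    static String documentId(List<String> keyValues, String delimiter) {
        return String.join(delimiter, keyValues);
    }

    public static void main(String[] args) {
        // With delimiter "$", keys KEY1/KEY2/KEY3 form the id "KEY1$KEY2$KEY3".
        System.out.println(documentId(List.of("KEY1", "KEY2", "KEY3"), "$"));
    }
}
```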

[jira] [Assigned] (FLINK-17764) Update tips about the default planner when the planner parameter value is not recognized

2020-05-16 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-17764:
---

Assignee: xushiwei

> Update tips about the default planner when the planner parameter value is not 
> recognized
> 
>
> Key: FLINK-17764
> URL: https://issues.apache.org/jira/browse/FLINK-17764
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Reporter: xushiwei
>Assignee: xushiwei
>Priority: Minor
>
> The default planner has been set to blink in the code.
> However, when the planner parameter value is not recognized, the tip still 
> says the default planner is flink. 
>  
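The mismatch can be avoided by deriving the tip from the actual default, so the message and the code cannot diverge. A hypothetical sketch (class, method, and message text are illustrative, not the example's real code):

```java
public class PlannerArgExample {
    // The actual default used by the code.
    static final String DEFAULT_PLANNER = "blink";

    // Returns the planner to use, printing a tip that names the *correct*
    // default when the supplied value is not recognized.
    static String resolvePlanner(String requested) {
        if ("blink".equals(requested) || "flink".equals(requested)) {
            return requested;
        }
        System.err.println("Unrecognized planner '" + requested
                + "', falling back to default: " + DEFAULT_PLANNER);
        return DEFAULT_PLANNER;
    }

    public static void main(String[] args) {
        System.out.println(resolvePlanner("bink")); // blink
    }
}
```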



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17764) Update tips about the default planner when the planner parameter value is not recognized

2020-05-16 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17109277#comment-17109277
 ] 

Dian Fu commented on FLINK-17764:
-

[~xushiwei] Thanks for the contribution. Have assigned the issue to you. :)

> Update tips about the default planner when the planner parameter value is not 
> recognized
> 
>
> Key: FLINK-17764
> URL: https://issues.apache.org/jira/browse/FLINK-17764
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Reporter: xushiwei
>Assignee: xushiwei
>Priority: Minor
>
> The default planner has been set to blink in the code.
> However, when the planner parameter value is not recognized, the tip still 
> says the default planner is flink. 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-17448) Implement ALTER TABLE for Hive dialect

2020-05-16 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-17448.

Resolution: Implemented

master: 2bbf8ed32a7fc29dd5f0c941f451bd3a7a6d0d1b

> Implement ALTER TABLE for Hive dialect
> --
>
> Key: FLINK-17448
> URL: https://issues.apache.org/jira/browse/FLINK-17448
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Table SQL / API
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Will cover ALTER table in this ticket



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-17448) Implement ALTER TABLE for Hive dialect

2020-05-16 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-17448:


Assignee: Rui Li

> Implement ALTER TABLE for Hive dialect
> --
>
> Key: FLINK-17448
> URL: https://issues.apache.org/jira/browse/FLINK-17448
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Table SQL / API
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Will cover ALTER table in this ticket



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi merged pull request #12108: [FLINK-17448][sql-parser][table-api-java][table-planner-blink][hive] Implement table DDLs for Hive dialect part2

2020-05-16 Thread GitBox


JingsongLi merged pull request #12108:
URL: https://github.com/apache/flink/pull/12108


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] klion26 commented on pull request #11722: [FLINK-5763][state backends] Make savepoints self-contained and relocatable

2020-05-16 Thread GitBox


klion26 commented on pull request #11722:
URL: https://github.com/apache/flink/pull/11722#issuecomment-629731842


   @StephanEwen thanks a lot for the follow-up pr and merging. I learned a lot 
from the follow-up pr that can be used in future contributions.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on pull request #12190: [FLINK-17757] Implement format factory for Avro serialization and deseriazation schema of RowData type

2020-05-16 Thread GitBox


wuchong commented on pull request #12190:
URL: https://github.com/apache/flink/pull/12190#issuecomment-629731816


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] KurtYoung commented on pull request #12145: [FLINK-17428] [table-planner-blink] supports projection push down on new table source interface in blink planner

2020-05-16 Thread GitBox


KurtYoung commented on pull request #12145:
URL: https://github.com/apache/flink/pull/12145#issuecomment-629730714


   still has some checkstyle errors



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] KurtYoung commented on a change in pull request #12029: [FLINK-17451][sql-parser][table-planner-blink][hive] Implement view D…

2020-05-16 Thread GitBox


KurtYoung commented on a change in pull request #12029:
URL: https://github.com/apache/flink/pull/12029#discussion_r426207410



##
File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ddl/AlterTablePropertiesOperation.java
##
@@ -26,27 +27,28 @@
 
 
 /**
- * Operation to describe a ALTER TABLE .. SET .. statement.
+ * Operation to describe a ALTER TABLE/VIEW .. SET .. statement.
  */
 public class AlterTablePropertiesOperation extends AlterTableOperation {

Review comment:
   Create a dedicated `AlterViewPropertiesOperation` that extends 
`AlterViewOperation`

##
File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ddl/AlterViewAsOperation.java
##
@@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.operations.ddl;
+
+import org.apache.flink.table.catalog.CatalogView;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+
+/**
+ * Operation to describe an ALTER VIEW ... AS ... statement.
+ */
+public class AlterViewAsOperation extends AlterTableOperation {

Review comment:
   Create a base class `AlterViewOperation` and extend from that; 
`AlterViewOperation` should be at the same level as `AlterTableOperation`
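The operation hierarchy suggested in this review can be sketched as follows (simplified, hypothetical class bodies; Flink's real `Operation` interface has more members):

```java
public class OperationHierarchyExample {
    // Simplified stand-in for Flink's Operation interface.
    interface Operation {
        String asSummaryString();
    }

    // Existing base class for ALTER TABLE DDL operations.
    abstract static class AlterTableOperation implements Operation { }

    // Suggested sibling base class: view-related DDL operations
    // get their own root instead of extending AlterTableOperation.
    abstract static class AlterViewOperation implements Operation { }

    static class AlterViewAsOperation extends AlterViewOperation {
        final String viewName;
        final String newQuery;

        AlterViewAsOperation(String viewName, String newQuery) {
            this.viewName = viewName;
            this.newQuery = newQuery;
        }

        @Override
        public String asSummaryString() {
            return "ALTER VIEW " + viewName + " AS " + newQuery;
        }
    }

    public static void main(String[] args) {
        Operation op = new AlterViewAsOperation("v", "SELECT 1");
        System.out.println(op.asSummaryString()); // ALTER VIEW v AS SELECT 1
    }
}
```

This keeps `instanceof AlterViewOperation` checks meaningful for all view DDL without mixing table and view semantics.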

##
File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/operations/SqlToOperationConverter.java
##
@@ -227,86 +234,134 @@ private Operation convertDropTable(SqlDropTable 
sqlDropTable) {
return new DropTableOperation(identifier, 
sqlDropTable.getIfExists(), sqlDropTable.isTemporary());
}
 
+   /**
+* convert ALTER VIEW statement.
+*/
+   private Operation convertAlterView(SqlAlterView alterView) {
+   UnresolvedIdentifier unresolvedIdentifier = 
UnresolvedIdentifier.of(alterView.fullViewName());
+   ObjectIdentifier viewIdentifier = 
catalogManager.qualifyIdentifier(unresolvedIdentifier);
+   Optional<CatalogManager.TableLookupResult> optionalCatalogTable 
= catalogManager.getTable(viewIdentifier);
+   if (!optionalCatalogTable.isPresent() || 
optionalCatalogTable.get().isTemporary()) {
+   throw new ValidationException(String.format("View %s 
doesn't exist or is a temporary view.",
+   viewIdentifier.toString()));
+   }
+   CatalogBaseTable baseTable = 
optionalCatalogTable.get().getTable();
+   if (baseTable instanceof CatalogTable) {
+   throw new ValidationException("ALTER VIEW for a table 
is not allowed");
+   }
+   if (alterView instanceof SqlAlterViewRename) {
+   UnresolvedIdentifier newUnresolvedIdentifier =
+   
UnresolvedIdentifier.of(((SqlAlterViewRename) alterView).fullNewViewName());
+   ObjectIdentifier newTableIdentifier = 
catalogManager.qualifyIdentifier(newUnresolvedIdentifier);
+   return new AlterTableRenameOperation(viewIdentifier, 
newTableIdentifier);

Review comment:
   create a new `AlterViewRenameOperation`

##
File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/operations/SqlToOperationConverter.java
##
@@ -227,86 +234,134 @@ private Operation convertDropTable(SqlDropTable 
sqlDropTable) {
return new DropTableOperation(identifier, 
sqlDropTable.getIfExists(), sqlDropTable.isTemporary());
}
 
+   /**
+* convert ALTER VIEW statement.
+*/
+   private Operation convertAlterView(SqlAlterView alterView) {
+   UnresolvedIdentifier unresolvedIdentifier = 
UnresolvedIdentifier.of(alterView.fullViewName());
+   ObjectIdentifier viewIdentifier = 
catalogManager.qualifyIdentifier(unresolvedIdentifier);
+   Optional<CatalogManager.TableLookupResult> optionalCatalogTable 
= catalogManager.getTable(viewIdentifier);
+   if (!optionalCatalogTable.isPresent() || 
optionalCatalogTable.get().isTemporary()) {
+   throw new ValidationException(String.format("View %s 

[GitHub] [flink] TsReaper commented on pull request #12073: [FLINK-17735][table] Add specialized collecting iterator to Blink planner

2020-05-16 Thread GitBox


TsReaper commented on pull request #12073:
URL: https://github.com/apache/flink/pull/12073#issuecomment-629730294


   > Is the description here correct? The classes do not seem to relate to the 
Blink planner, but are all in the `flink-streaming-java` module.
   > 
   > I also don't understand how this is related specifically to SQL - isn't 
this a client side functionality, meaning it would be run as part of the 
JobClient (and used by the SQL Shell)?
   
   We initially wanted to implement an iterator that spills large data to 
disk on the client side. Currently only `ResettableExternalBuffer` in the 
Blink planner can achieve this conveniently. But after an offline discussion 
with @KurtYoung yesterday, we decided to first simplify the implementation to 
a memory-only version, so this iterator is no longer related to the Blink 
planner. I'll update the description.
   
   The iterator does not run in `JobClient` but actually uses `JobClient`. It 
indeed runs on the client side and is used directly by users.
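A memory-only collecting iterator of the kind described can be sketched with a blocking queue and an end-of-stream sentinel (a hypothetical illustration, not the PR's actual implementation):

```java
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class CollectIteratorExample {
    // Sentinel object marking that the producing job has finished.
    private static final Object END = new Object();

    // Iterator that blocks in hasNext() until the job-side producer
    // offers a result or signals completion.
    static final class CollectIterator<T> implements Iterator<T> {
        private final BlockingQueue<Object> queue = new LinkedBlockingQueue<>();
        private Object next;

        void add(T element) { queue.add(element); }
        void finish() { queue.add(END); }

        @Override
        public boolean hasNext() {
            if (next == null) {
                try {
                    next = queue.take(); // blocks until data or END arrives
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
            return next != END;
        }

        @Override
        @SuppressWarnings("unchecked")
        public T next() {
            if (!hasNext()) {
                throw new NoSuchElementException();
            }
            T result = (T) next;
            next = null;
            return result;
        }
    }

    public static void main(String[] args) {
        CollectIterator<Integer> it = new CollectIterator<>();
        it.add(1);
        it.add(2);
        it.finish();
        while (it.hasNext()) {
            System.out.println(it.next()); // prints 1, then 2
        }
    }
}
```

A spilling version would replace the in-memory queue with a buffer that overflows to disk, which is the part deferred to a later change.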



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-web] yangjf2019 commented on pull request #268: [FLINK-13343][docs-zh] Translate "Contribute Code" page into Chinese

2020-05-16 Thread GitBox


yangjf2019 commented on pull request #268:
URL: https://github.com/apache/flink-web/pull/268#issuecomment-629729435


   Hi @klion26, thank you for your help. I have completed all the changes, 
please take a look. Thank you!







[GitHub] [flink-web] yangjf2019 commented on a change in pull request #268: [FLINK-13343][docs-zh] Translate "Contribute Code" page into Chinese

2020-05-16 Thread GitBox


yangjf2019 commented on a change in pull request #268:
URL: https://github.com/apache/flink-web/pull/268#discussion_r426206770



##
File path: contributing/contribute-code.zh.md
##
@@ -136,91 +136,91 @@ Apache Flink is maintained, improved, and extended by 
code contributions of volu
 
 
 
-### 1. Create Jira Ticket and Reach Consensus
+### 1. 创建 Jira 工单并达成共识。
 
 
-The first step for making a contribution to Apache Flink is to reach consensus 
with the Flink community. This means agreeing on the scope and implementation 
approach of a change.
+向 Apache Flink 做出贡献的第一步是与 Flink 社区达成共识,这意味着需要一起商定更改的范围和实现的方法。
 
-In most cases, the discussion should happen in [Flink's bug tracker: 
Jira](https://issues.apache.org/jira/projects/FLINK/summary).
+在大多数情况下,我们应该在 [Flink 的 Bug 
追踪器:Jira](https://issues.apache.org/jira/projects/FLINK/summary) 中进行讨论。
 
-The following types of changes require a `[DISCUSS]` thread on the 
dev@flink.a.o Flink mailing list:
+以下类型的更改需要向 Flink 的 d...@flink.apache.org 邮件列表发一封以 `[DISCUSS]` 开头的邮件:
 
- - big changes (major new feature; big refactorings, involving multiple 
components)
- - potentially controversial changes or issues
- - changes with very unclear approaches or multiple equal approaches
+ - 重大变化(主要新功能、大重构和涉及多个组件)
+ - 可能存在争议的改动或问题
+ - 采用非常不明确的方法或有多种实现方法
 
- Do not open a Jira ticket for these types of changes before the discussion 
has come to a conclusion.
- Jira tickets based on a dev@ discussion need to link to that discussion and 
should summarize the outcome.
+ 在讨论未达成一致之前,不要为这些类型的更改打开 Jira 工单。
+ 基于 dev 邮件讨论的 Jira 工单需要链接到该讨论,并总结结果。
 
 
 
-**Requirements for a Jira ticket to get consensus:**
+**Jira 工单获得共识的要求:**
 
-  - Formal requirements
- - The *Title* describes the problem concisely.
- - The *Description* gives all the details needed to understand the 
problem or feature request.
- - The *Component* field is set: Many committers and contributors only 
focus on certain subsystems of Flink. Setting the appropriate component is 
important for getting their attention.
-  - There is **agreement** that the ticket solves a valid problem, and that it 
is a **good fit** for Flink.
-The Flink community considers the following aspects:
- - Does the contribution alter the behavior of features or components in a 
way that it may break previous users’ programs and setups? If yes, there needs 
to be a discussion and agreement that this change is desirable.
- - Does the contribution conceptually fit well into Flink? Is it too much 
of a special case such that it makes things more complicated for the common 
case, or bloats the abstractions / APIs?
- - Does the feature fit well into Flink’s architecture? Will it scale and 
keep Flink flexible for the future, or will the feature restrict Flink in the 
future?
- - Is the feature a significant new addition (rather than an improvement 
to an existing part)? If yes, will the Flink community commit to maintaining 
this feature?
- - Does this feature align well with Flink's roadmap and currently ongoing 
efforts?
- - Does the feature produce added value for Flink users or developers? Or 
does it introduce the risk of regression without adding relevant user or 
developer benefit?
- - Could the contribution live in another repository, e.g., Apache Bahir 
or another external repository?
- - Is this a contribution just for the sake of getting a commit in an open 
source project (fixing typos, style changes merely for taste reasons)
-  - There is **consensus** on how to solve the problem. This includes 
considerations such as
-- API and data backwards compatibility and migration strategies
-- Testing strategies
-- Impact on Flink's build time
-- Dependencies and their licenses
+  - 正式要求
+ - 描述问题的 *Title* 要简明扼要。
+ - 在 *Description* 中要提供了解问题或功能请求所需的所有详细信息。
+ - 要设置 *Component* 字段:许多 committers 和贡献者,只专注于 Flink 
的某些子系统。设置适当的组件标签对于引起他们的注意很重要。
+  - 社区*一致同意*使用工单是有效解决问题的方法,而且这**非常适合** Flink。 
+Flink 社区考虑了以下几个方面:
+ - 这种贡献是否会改变特性或组件的性能,从而破坏以前的用户程序和设置?如果是,那么就需要讨论并达成一致意见,证明这种改变是可取的。
+ - 这个贡献在概念上是否适合 Flink ?这是否是一种特殊场景?支持这种场景后会导致通用的场景变得更复杂,还是使整体抽象或者 APIs 
变得更臃肿?
+ - 该功能是否适合 Flink 的架构?它是否易扩展并保持 Flink 未来的灵活性,或者该功能将来会限制 Flink 吗?
+ - 该特性是一个重要的新增内容(而不是对现有内容的改进)吗?如果是,Flink 社区会承诺维护这个特性吗?
+ - 这个特性是否与 Flink 的路线图以及当前正在进行的工作内容一致?
+ - 该特性是否为 Flink 用户或开发人员带来了附加价值?或者它引入了回归的风险而没有给相关的用户或开发人员带来好处?
+ - 该贡献是否存在于第三方的库中,例如 Apache Bahir 或者第三方的库?
+ - 这仅仅是为了在开源项目中获得提交而做出的贡献吗(仅仅是为了获得贡献而贡献,才去修复拼写错误、改变代码风格)?
+  - 在如何解决这个问题上已有**共识**,包括以下需要考虑的因素
+- API、数据向后兼容性和迁移策略
+- 测试策略
+- 对 Flink 构建时间的影响
+- 依赖关系及其许可证
 
-If a change is identified as a large or controversial change in the discussion 
on Jira, it might require a [Flink Improvement Proposal 
(FLIP)](https://cwiki.apache.org/confluence/display/FLINK/Flink+Improvement+Proposals)
 or a discussion on the [dev mailing list]( {{ site.base 
}}/community.html#mailing-lists) to reach agreement and consensus.
+如果在 

[GitHub] [flink-web] yangjf2019 commented on a change in pull request #268: [FLINK-13343][docs-zh] Translate "Contribute Code" page into Chinese

2020-05-16 Thread GitBox


yangjf2019 commented on a change in pull request #268:
URL: https://github.com/apache/flink-web/pull/268#discussion_r426206487



##
File path: contributing/contribute-code.zh.md
##
@@ -2,17 +2,17 @@
 title:  "贡献代码"
 ---
 
-Apache Flink is maintained, improved, and extended by code contributions of 
volunteers. We welcome contributions to Flink, but due to the size of the 
project and to preserve the high quality of the code base, we follow a 
contribution process that is explained in this document.
+Apache Flink 是由志愿者贡献的代码来维护、改进和扩展的。我们欢迎给 Flink 
做贡献,但由于项目的规模大,以及为了保持高质量的代码库,本文将阐述我们所遵循的贡献流程。

Review comment:
   Thank you for the guidance.









[GitHub] [flink] flinkbot edited a comment on pull request #12029: [FLINK-17451][sql-parser][table-planner-blink][hive] Implement view D…

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12029:
URL: https://github.com/apache/flink/pull/12029#issuecomment-625673739


   
   ## CI report:
   
   * 9405ea4470dc022ffb514f603396fc6bb2582835 UNKNOWN
   * c1ad4b5d93e10b76fad14269f46fa62e4d771bed Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1554)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #11920: [FLINK-17408] Introduce GPUDriver and discovery script

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #11920:
URL: https://github.com/apache/flink/pull/11920#issuecomment-619909940


   
   ## CI report:
   
   * 64ac1d1699c6fb6b941a6232d562840a73a107e3 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1561)
 
   * c8c6a5e4330862e9f54cd598b76e1adb2e11bb54 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1580)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] KarmaGYZ commented on pull request #11920: [FLINK-17408] Introduce GPUDriver and discovery script

2020-05-16 Thread GitBox


KarmaGYZ commented on pull request #11920:
URL: https://github.com/apache/flink/pull/11920#issuecomment-629725321


   Thanks for the review @tillrohrmann . I've updated the PR and verified the 
latest version on a machine that has GPU support.







[GitHub] [flink] KarmaGYZ commented on a change in pull request #11920: [FLINK-17408] Introduce GPUDriver and discovery script

2020-05-16 Thread GitBox


KarmaGYZ commented on a change in pull request #11920:
URL: https://github.com/apache/flink/pull/11920#discussion_r426203278



##
File path: flink-dist/src/main/assemblies/opt.xml
##
@@ -75,6 +75,28 @@
0644

 
+		<file>
+			<source>../flink-external-resources/flink-external-resource-gpu/target/flink-external-resource-gpu-${project.version}.jar</source>
+			<outputDirectory>opt/external-resource-gpu/</outputDirectory>
+			<destName>flink-external-resource-gpu-${project.version}.jar</destName>
+			<fileMode>0644</fileMode>
+		</file>
+
+		<file>
+			<source>../flink-external-resources/flink-external-resource-gpu/src/main/resources/gpu-discovery-common.sh</source>
+			<outputDirectory>opt/external-resource-gpu/</outputDirectory>
+			<destName>gpu-discovery-common.sh</destName>
+			<fileMode>0755</fileMode>
+		</file>
+
+		<file>
+			<source>../flink-external-resources/flink-external-resource-gpu/src/main/resources/nvidia-gpu-discovery.sh</source>
+			<outputDirectory>opt/external-resource-gpu/</outputDirectory>
+			<destName>nvidia-gpu-discovery.sh</destName>
+			<fileMode>0755</fileMode>
+		</file>

Review comment:
   Are you worried about the total size of flink-dist? Since this feature is only 22 KB (while most of the metric reporter plugins are over 100 KB), I think it would not affect the download time of flink-dist. I also noticed that there is a discussion about offering a slim package and a fat package. If I understand it correctly, it would be good to just include it in the fat package.
   Regarding usability, I think it would be easier to move the `external-resource-gpu` from 'opt' to 'plugin' than to search for and download it from the website. WDYT?









[GitHub] [flink] KarmaGYZ commented on a change in pull request #11920: [FLINK-17408] Introduce GPUDriver and discovery script

2020-05-16 Thread GitBox


KarmaGYZ commented on a change in pull request #11920:
URL: https://github.com/apache/flink/pull/11920#discussion_r426203278



##
File path: flink-dist/src/main/assemblies/opt.xml
##
@@ -75,6 +75,28 @@
0644

 
+		<file>
+			<source>../flink-external-resources/flink-external-resource-gpu/target/flink-external-resource-gpu-${project.version}.jar</source>
+			<outputDirectory>opt/external-resource-gpu/</outputDirectory>
+			<destName>flink-external-resource-gpu-${project.version}.jar</destName>
+			<fileMode>0644</fileMode>
+		</file>
+
+		<file>
+			<source>../flink-external-resources/flink-external-resource-gpu/src/main/resources/gpu-discovery-common.sh</source>
+			<outputDirectory>opt/external-resource-gpu/</outputDirectory>
+			<destName>gpu-discovery-common.sh</destName>
+			<fileMode>0755</fileMode>
+		</file>
+
+		<file>
+			<source>../flink-external-resources/flink-external-resource-gpu/src/main/resources/nvidia-gpu-discovery.sh</source>
+			<outputDirectory>opt/external-resource-gpu/</outputDirectory>
+			<destName>nvidia-gpu-discovery.sh</destName>
+			<fileMode>0755</fileMode>
+		</file>

Review comment:
   Are you worried about the total size of flink-dist? Since this feature is only 22 KB (while most of the metric reporter plugins are over 100 KB), I think it would not affect the download time of flink-dist. I also noticed that there is a discussion about offering a slim jar and a fat jar. If I understand it correctly, it would be good to just include it in the fat jar.
   Regarding usability, I think it would be easier to move the `external-resource-gpu` from 'opt' to 'plugin' than to search for and download it from the website. WDYT?









[GitHub] [flink] flinkbot edited a comment on pull request #11920: [FLINK-17408] Introduce GPUDriver and discovery script

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #11920:
URL: https://github.com/apache/flink/pull/11920#issuecomment-619909940


   
   ## CI report:
   
   * a06b6bc5d3ff8d57e4610da4fe3ae1ec1f1d0b01 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1399)
 
   * 64ac1d1699c6fb6b941a6232d562840a73a107e3 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1561)
 
   * c8c6a5e4330862e9f54cd598b76e1adb2e11bb54 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1580)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #11920: [FLINK-17408] Introduce GPUDriver and discovery script

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #11920:
URL: https://github.com/apache/flink/pull/11920#issuecomment-619909940


   
   ## CI report:
   
   * a06b6bc5d3ff8d57e4610da4fe3ae1ec1f1d0b01 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1399)
 
   * 64ac1d1699c6fb6b941a6232d562840a73a107e3 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1561)
 
   * c8c6a5e4330862e9f54cd598b76e1adb2e11bb54 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #11854: [FLINK-17407] Introduce external resource framework

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #11854:
URL: https://github.com/apache/flink/pull/11854#issuecomment-617586491


   
   ## CI report:
   
   * bddb0e274da11bbe99d15c6e0bb55e8d8c0e658a UNKNOWN
   * dc7a9c5c7d1fac82518815b9277809dfb82ddaac UNKNOWN
   * 2238559b0e2245e77204e7c7d0ef34c7a97e3766 UNKNOWN
   * 8be6c46114192d31061079e547fc125c08b916b1 UNKNOWN
   * 7cffcdf13e6e77e475bd3c2aa7b0e4edea910cd5 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1557)
 
   * d255903bbe5da86d53547ad6a3d5ddf02e63b913 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1564)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12172: [FLINK-17656][tests] Migrate docker e2e tests to flink-docker

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12172:
URL: https://github.com/apache/flink/pull/12172#issuecomment-629230668


   
   ## CI report:
   
   * 59d7735635b83251b862eeedef8af62662dd0919 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1540)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12186: [FLINK-16383][task] Do not relay notifyCheckpointComplete to closed operators

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12186:
URL: https://github.com/apache/flink/pull/12186#issuecomment-629492236


   
   ## CI report:
   
   * 17deb1ace51d274715027adbeb607feb3958347a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1577)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] senegalo edited a comment on pull request #12056: [FLINK-17502] [flink-connector-rabbitmq] RMQSource refactor

2020-05-16 Thread GitBox


senegalo edited a comment on pull request #12056:
URL: https://github.com/apache/flink/pull/12056#issuecomment-629626611


   @aljoscha & @austince 
   So I pushed some new changes, and let me explain my endeavors for the last 3 hours!
   
   ### My Goal
   * Combine body / correlation ID parsing in one go
   * Conform with the changes made by #12093 
   * Have a 1-to-N relation between an AMQP delivery and the parsed records that are passed to the collector
   
   ### Why I failed to implement the suggestion
   
   I did exactly what you described above: 
   * I passed the collector to the interface `RMQDeserializationSchema`.
   * When `processMessage` is called, it extracts the record(s) and the correlation ID, then calls the collector's newly implemented method `collect(OUT records, String correlationID)`.
   * The collector then stashes the correlationId in a private var for me to use in the `synchronized` block.
   
   The problem is that, at least to my understanding, once you call `collect` on the collector, the data is already out of the source operator.
   This is a problem when `autoAck` is false, because we need to decide whether to add the record(s) to the collector based on whether we've seen this ID before.
   Since we were doing both in one go, it was impossible without a lot of hacking around in the code.
   
   ### Alternative solution pushed
   
   * `RMQDeserializationSchema` deserializes both the record(s) and the correlation ID in one go, then returns an instance of `RMQDeserializedMessage`, which is just a wrapper class around those values.
   * The `RMQSource` calls its `parseMessage` method, which decides how to deserialize the message, either the old way or using the `RMQDeserializationSchema`, and in either case returns an instance of `RMQDeserializedMessage`.
   * If `autoAck` is false, the `synchronized` block can easily access the `correlationID` from the `RMQDeserializedMessage` using the `getCorrelationID` method.
   * The collector collects the record(s) from `RMQDeserializedMessage#getMessages`, which returns a `List`.
   * Finally, the `RMQCollector` now has a `collect(List records)` method where I just iterate over the records produced by the single AMQP delivery and call the normal `collect(OUT record)`.
   
   Hope that solution makes sense.
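
   For illustration, the wrapper type described above could look roughly like this (plain Java sketch; the names `RMQDeserializedMessage`, `getCorrelationID`, and `getMessages` follow the comment, but the actual PR code may differ):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of the wrapper returned by the deserialization step: one AMQP
// delivery yields one correlation ID plus N parsed records.
class RMQDeserializedMessage<OUT> {
    private final String correlationID;
    private final List<OUT> messages;

    RMQDeserializedMessage(String correlationID, List<OUT> messages) {
        this.correlationID = correlationID;
        // Defensive copy so the wrapper is immutable after construction.
        this.messages = Collections.unmodifiableList(new ArrayList<>(messages));
    }

    // Consulted by the synchronized block when autoAck is false, to decide
    // whether this delivery was already seen.
    String getCorrelationID() {
        return correlationID;
    }

    // The collector iterates over these and emits each record individually.
    List<OUT> getMessages() {
        return messages;
    }
}
```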







[jira] [Closed] (FLINK-7267) Add support for lists of hosts to connect

2020-05-16 Thread Stephan Ewen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-7267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-7267.
---

> Add support for lists of hosts to connect
> -
>
> Key: FLINK-7267
> URL: https://issues.apache.org/jira/browse/FLINK-7267
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors/ RabbitMQ
>Affects Versions: 1.3.0
>Reporter: Hu Hailin
>Assignee: Austin Cawley-Edwards
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> The RMQConnectionConfig can assign one host:port only. I want to connect to a 
> cluster with an available node.
> My workaround is to write my own sink extending RMQSink and overriding open(), 
> assigning the node list in it.
> {code:java}
>   connection = factory.newConnection(addrs)
> {code}
> I still need to build the RMQConnectionConfig with a dummy host:port or a 
> node in the list. It's annoying.
> I think it is better to provide a configuration for it.
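
A host-list configuration of this kind could be sketched as follows (plain Java; `Address` here is a local stand-in for `com.rabbitmq.client.Address`, and the comma-separated format is an assumption for illustration, not the connector's actual API):

{code:java}
import java.util.ArrayList;
import java.util.List;

// Stand-in for com.rabbitmq.client.Address; illustrative only.
class Address {
    final String host;
    final int port;

    Address(String host, int port) {
        this.host = host;
        this.port = port;
    }
}

class AddressListParser {
    // Parses "host1:5673,host2" into a list of addresses that a
    // RMQConnectionConfig-style builder could pass to
    // factory.newConnection(addrs) instead of a single host:port.
    static List<Address> parse(String hosts, int defaultPort) {
        List<Address> result = new ArrayList<>();
        for (String part : hosts.split(",")) {
            String[] hostPort = part.trim().split(":");
            int port = hostPort.length > 1 ? Integer.parseInt(hostPort[1]) : defaultPort;
            result.add(new Address(hostPort[0], port));
        }
        return result;
    }
}
{code}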



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (FLINK-7267) Add support for lists of hosts to connect

2020-05-16 Thread Stephan Ewen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-7267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-7267.
-
Fix Version/s: 1.11.0
   Resolution: Fixed

Implemented in 1.11.0 via
  - cce715b78ca85b8eee258f32b1e6fb366ca56998
  - 6fa85fea3c3dc1414c1aa4147744c82b3f4fede0


> Add support for lists of hosts to connect
> -
>
> Key: FLINK-7267
> URL: https://issues.apache.org/jira/browse/FLINK-7267
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors/ RabbitMQ
>Affects Versions: 1.3.0
>Reporter: Hu Hailin
>Assignee: Austin Cawley-Edwards
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> The RMQConnectionConfig can assign one host:port only. I want to connect to a 
> cluster with an available node.
> My workaround is to write my own sink extending RMQSink and overriding open(), 
> assigning the node list in it.
> {code:java}
>   connection = factory.newConnection(addrs)
> {code}
> I still need to build the RMQConnectionConfig with a dummy host:port or a 
> node in the list. It's annoying.
> I think it is better to provide a configuration for it.





[jira] [Closed] (FLINK-17582) Update quickstarts to use universal Kafka connector

2020-05-16 Thread Stephan Ewen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-17582.


> Update quickstarts to use universal Kafka connector
> ---
>
> Key: FLINK-17582
> URL: https://issues.apache.org/jira/browse/FLINK-17582
> Project: Flink
>  Issue Type: Improvement
>  Components: Quickstarts
>Reporter: Seth Wiesman
>Assignee: Seth Wiesman
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Assigned] (FLINK-7267) Add support for lists of hosts to connect

2020-05-16 Thread Stephan Ewen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-7267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen reassigned FLINK-7267:
---

Assignee: Austin Cawley-Edwards

> Add support for lists of hosts to connect
> -
>
> Key: FLINK-7267
> URL: https://issues.apache.org/jira/browse/FLINK-7267
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors/ RabbitMQ
>Affects Versions: 1.3.0
>Reporter: Hu Hailin
>Assignee: Austin Cawley-Edwards
>Priority: Minor
>  Labels: pull-request-available
>
> The RMQConnectionConfig can assign one host:port only. I want to connect to a 
> cluster with an available node.
> My workaround is to write my own sink extending RMQSink and overriding open(), 
> assigning the node list in it.
> {code:java}
>   connection = factory.newConnection(addrs)
> {code}
> I still need to build the RMQConnectionConfig with a dummy host:port or a 
> node in the list. It's annoying.
> I think it is better to provide a configuration for it.





[jira] [Resolved] (FLINK-17582) Update quickstarts to use universal Kafka connector

2020-05-16 Thread Stephan Ewen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-17582.
--
Resolution: Fixed

Fixed in 1.11.0 via
  - 812d3d13d8c7966d62f40b990470ec468a16c80e

> Update quickstarts to use universal Kafka connector
> ---
>
> Key: FLINK-17582
> URL: https://issues.apache.org/jira/browse/FLINK-17582
> Project: Flink
>  Issue Type: Improvement
>  Components: Quickstarts
>Reporter: Seth Wiesman
>Assignee: Seth Wiesman
>Priority: Major
>  Labels: pull-request-available
>






[GitHub] [flink] asfgit closed pull request #12191: [hotfix][flink-core and docs] fix typos

2020-05-16 Thread GitBox


asfgit closed pull request #12191:
URL: https://github.com/apache/flink/pull/12191


   







[GitHub] [flink] asfgit closed pull request #12064: [hotfix][javadocs]java doc error fix

2020-05-16 Thread GitBox


asfgit closed pull request #12064:
URL: https://github.com/apache/flink/pull/12064


   







[GitHub] [flink] asfgit closed pull request #12044: [FLINK-17582][quickstarts] Update quickstarts to use universal Kafka connector

2020-05-16 Thread GitBox


asfgit closed pull request #12044:
URL: https://github.com/apache/flink/pull/12044


   







[GitHub] [flink] asfgit closed pull request #12185: [FLINK-7267][connectors/rabbitmq] Allow overriding RMQ connection setup

2020-05-16 Thread GitBox


asfgit closed pull request #12185:
URL: https://github.com/apache/flink/pull/12185


   







[GitHub] [flink] asfgit closed pull request #12066: [hotfix][runtime] Remove useless local variable in CompletedCheckpointStoreTest#testAddCheckpointMoreThanMaxRetained

2020-05-16 Thread GitBox


asfgit closed pull request #12066:
URL: https://github.com/apache/flink/pull/12066


   







[GitHub] [flink] flinkbot edited a comment on pull request #12186: [FLINK-16383][task] Do not relay notifyCheckpointComplete to closed operators

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12186:
URL: https://github.com/apache/flink/pull/12186#issuecomment-629492236


   
   ## CI report:
   
   * 25f5493bb635b3876ef629cdfb87b3d7a6aa8fff Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1520)
 
   * 17deb1ace51d274715027adbeb607feb3958347a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1577)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12169: [FLINK-17495][metrics][prometheus]Add custom labels on PrometheusReporter like PrometheusPushGatewayReporter's groupingKey

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12169:
URL: https://github.com/apache/flink/pull/12169#issuecomment-629195529


   
   ## CI report:
   
   * 5d47c27ad3e586414f7f4ad2623acd477b7a6644 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1532)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12056: [FLINK-17502] [flink-connector-rabbitmq] RMQSource refactor

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12056:
URL: https://github.com/apache/flink/pull/12056#issuecomment-626167900


   
   ## CI report:
   
   * 850f402a0e81fa0c097a0a883f56e2aa597b8f55 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1534)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] StephanEwen commented on pull request #12122: [FLINK-15102] Add continuousSource() method to StreamExecutionEnvironment.

2020-05-16 Thread GitBox


StephanEwen commented on pull request #12122:
URL: https://github.com/apache/flink/pull/12122#issuecomment-629702088


   The update looks fine to me.
   
   It would be nice to address the comment about the `CoordinatedSourceITCase` 
(more thorough result checking), otherwise +1 to merge this 







[GitHub] [flink] flinkbot edited a comment on pull request #12186: [FLINK-16383][task] Do not relay notifyCheckpointComplete to closed operators

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12186:
URL: https://github.com/apache/flink/pull/12186#issuecomment-629492236


   
## CI report:

* 25f5493bb635b3876ef629cdfb87b3d7a6aa8fff Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1520)
* 17deb1ace51d274715027adbeb607feb3958347a UNKNOWN
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12184: [FLINK-17027] Introduce a new Elasticsearch 7 connector with new property keys

2020-05-16 Thread GitBox


flinkbot edited a comment on pull request #12184:
URL: https://github.com/apache/flink/pull/12184#issuecomment-629425634


   
## CI report:

* 80bbdd72c2aeb3c802deef71436882603590d147 UNKNOWN
* fba63e0698c8c60a04efb10a8914ea548bacc653 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1536)
* 25a6ad18a9f957a5f2081e9e9282aa967343987b Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1573)
* cd36d3abe76854bdabb47fd498e51786fee69239 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1576)
   
   






