[GitHub] [flink] flinkbot edited a comment on pull request #12003: [FLINK-10934] Support application mode for kubernetes

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #12003:
URL: https://github.com/apache/flink/pull/12003#issuecomment-624416714


   
   ## CI report:
   
   * 068f558f4deeb9ddbb4cb0ea8013bbe099e912cd Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=658)
 
   * e4018f5fcffebdab1398266c65de191385554859 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12004: [FLINK-17434][core][hive] Hive partitioned source support streaming read

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #12004:
URL: https://github.com/apache/flink/pull/12004#issuecomment-624445192


   
   ## CI report:
   
   * dfa5dc1b96f0e1a13b9c31b4fab97cf58d580299 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=666)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Closed] (FLINK-17496) Performance regression with amazon-kinesis-producer 0.13.1 in Flink 1.10.x

2020-05-05 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li closed FLINK-17496.
-
Fix Version/s: 1.11.0
   1.10.1
   Resolution: Fixed

Thanks for logging and fixing this issue [~thw]!

Merged (by Thomas) into:
- master via a737cdcbdb972913e2e31946f7a2bb9175945a29
- release-1.10 via 99ea1aac8338fa7d5c947f72943e5a3dfc0c0dbe

> Performance regression with amazon-kinesis-producer 0.13.1 in Flink 1.10.x
> --
>
> Key: FLINK-17496
> URL: https://issues.apache.org/jira/browse/FLINK-17496
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.10.0
> Environment: The KPL upgrade in 1.10.0 has introduced a performance 
> issue, which can be addressed by reverting to 0.12.9 or forward fix with 
> 0.14.0. 
>Reporter: Thomas Weise
>Assignee: Thomas Weise
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17540) Adding tests/tools checking the pattern of new configuration options following the xyz.max/min pattern

2020-05-05 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100467#comment-17100467
 ] 

Yangze Guo commented on FLINK-17540:


Not sure if there is an existing mechanism that could be leveraged. It would be 
appreciated if anyone could give a pointer.
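For illustration, a naive version of such a check could look like the following (the class and method names are made up, not an existing Flink utility): split each key on dots and flag segments where max/min is embedded inside a segment instead of terminating the hierarchy.

```java
import java.util.regex.Pattern;

// Hypothetical sketch of the check FLINK-17540 asks for (not actual Flink code):
// "cluster.registration.timeout.max" follows the convention, while
// "cluster.registration.max-timeout" embeds max inside a segment and is flagged.
class MaxMinPatternCheck {

    // matches max/min (or maximum/minimum) embedded in a hyphenated segment
    private static final Pattern EMBEDDED =
            Pattern.compile("(^|-)(max|min|maximum|minimum)($|-)");

    static boolean followsPattern(String key) {
        String[] parts = key.split("\\.");
        for (int i = 0; i < parts.length; i++) {
            String part = parts[i];
            if (part.equals("max") || part.equals("min")) {
                // min/max must be the last part of the hierarchy
                if (i != parts.length - 1) {
                    return false;
                }
            } else if (EMBEDDED.matcher(part).find()) {
                return false; // e.g. "max-timeout", "pool-size-max"
            }
        }
        return true;
    }
}
```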

> Adding tests/tools checking the pattern of new configuration options 
> following the xyz.max/min pattern
> --
>
> Key: FLINK-17540
> URL: https://issues.apache.org/jira/browse/FLINK-17540
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Reporter: Yangze Guo
>Priority: Major
>






[jira] [Created] (FLINK-17540) Adding tests/tools checking the pattern of new configuration options following the xyz.max/min pattern

2020-05-05 Thread Yangze Guo (Jira)
Yangze Guo created FLINK-17540:
--

 Summary: Adding tests/tools checking the pattern of new 
configuration options following the xyz.max/min pattern
 Key: FLINK-17540
 URL: https://issues.apache.org/jira/browse/FLINK-17540
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Configuration
Reporter: Yangze Guo








[jira] [Updated] (FLINK-17535) Treat min/max as part of the hierarchy of config option

2020-05-05 Thread Yangze Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yangze Guo updated FLINK-17535:
---
Component/s: Runtime / Configuration

> Treat min/max as part of the hierarchy of config option
> ---
>
> Key: FLINK-17535
> URL: https://issues.apache.org/jira/browse/FLINK-17535
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: Yangze Guo
>Priority: Major
>
> As discussed in
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Should-max-min-be-part-of-the-hierarchy-of-config-option-td40578.html,
> we decided to treat min/max as part of the hierarchy of a config option. This
> ticket is an umbrella for all related tasks.





[jira] [Commented] (FLINK-17166) Modify the log4j-console.properties to also output logs into the files for WebUI

2020-05-05 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100466#comment-17100466
 ] 

Yang Wang commented on FLINK-17166:
---

[~trohrmann] I am not sure about what you said in the ML, that there are
solutions for redirecting stdout and stderr into separate files using tee
without duplication [1]. If you mean substreams, then we still need grep to
filter out the log4j pattern, like the following.
{code:bash}
program 2>&1 | tee >(grep --line-buffered -v -E "$LOG4J_LAYOUT_PATTERN" > "$out")
{code}

In my opinion, we cannot easily separate {{System.out}} and the log4j
{{ConsoleAppender}} output from stdout, since they share the same {{FileDescriptor}}.

  

[1]. [http://www.softpanorama.org/Tools/tee.shtml]
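To make the grep-based split concrete, here is a deterministic sketch (the layout pattern, sample lines, and variable names are made up for illustration). Note it separates the streams only by matching the log4j layout pattern; it cannot tell them apart at the file-descriptor level, which is exactly the limitation described above.

```shell
# Assumed regex for the start of a log4j layout line (not Flink's real pattern).
LOG4J_LAYOUT_PATTERN='^[0-9]{4}-[0-9]{2}-[0-9]{2} '

# Sample combined stdout: a log4j ConsoleAppender line and a System.out line,
# which arrive on the same file descriptor and are otherwise indistinguishable.
combined='2020-05-05 12:00:00,000 INFO SomeClass - a log4j line
plain System.out line'

# Split after the fact: log4j lines vs. user output.
log_lines=$(printf '%s\n' "$combined" | grep -E "$LOG4J_LAYOUT_PATTERN")
user_lines=$(printf '%s\n' "$combined" | grep -v -E "$LOG4J_LAYOUT_PATTERN")

echo "$user_lines"
```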

> Modify the log4j-console.properties to also output logs into the files for 
> WebUI
> 
>
> Key: FLINK-17166
> URL: https://issues.apache.org/jira/browse/FLINK-17166
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Reporter: Andrey Zagrebin
>Assignee: Yang Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>






[GitHub] [flink] flinkbot commented on pull request #12004: [FLINK-17434][core][hive] Hive partitioned source support streaming read

2020-05-05 Thread GitBox


flinkbot commented on pull request #12004:
URL: https://github.com/apache/flink/pull/12004#issuecomment-624445192


   
   ## CI report:
   
   * dfa5dc1b96f0e1a13b9c31b4fab97cf58d580299 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Updated] (FLINK-17539) Migrate the configuration options which do not follow the xyz.max/min pattern.

2020-05-05 Thread Yangze Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yangze Guo updated FLINK-17539:
---
Component/s: Runtime / Configuration

> Migrate the configuration options which do not follow the xyz.max/min pattern.
> --
>
> Key: FLINK-17539
> URL: https://issues.apache.org/jira/browse/FLINK-17539
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Reporter: Yangze Guo
>Priority: Major
> Fix For: 2.0.0
>
>
> Config options that need to be changed:
> - restart-strategy.failure-rate.max-failures-per-interval
> - yarn.maximum-failed-containers
> - state.backend.rocksdb.compaction.level.max-size-level-base
> - cluster.registration.max-timeout
> - high-availability.zookeeper.client.max-retry-attempts
> - rest.client.max-content-length
> - rest.retry.max-attempts
> - rest.server.max-content-length
> - jobstore.max-capacity
> - taskmanager.registration.max-backoff
> - compiler.delimited-informat.max-line-samples
> - compiler.delimited-informat.min-line-samples
> - compiler.delimited-informat.max-sample-len
> - taskmanager.runtime.max-fan
> - pipeline.max-parallelism
> - execution.checkpointing.max-concurrent-checkpoint
> - execution.checkpointing.min-pause
> - akka.client-socket-worker-pool.pool-size-max
> - akka.client-socket-worker-pool.pool-size-min
> - akka.fork-join-executor.parallelism-max
> - akka.fork-join-executor.parallelism-min
> - akka.server-socket-worker-pool.pool-size-max
> - akka.server-socket-worker-pool.pool-size-min
> - containerized.heap-cutoff-min





[jira] [Assigned] (FLINK-16991) Support DynamicTableSink in planner

2020-05-05 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-16991:
---

Assignee: Jark Wu

> Support DynamicTableSink in planner
> ---
>
> Key: FLINK-16991
> URL: https://issues.apache.org/jira/browse/FLINK-16991
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Timo Walther
>Assignee: Jark Wu
>Priority: Major
>
> Support the {{DynamicTableSink}} interface in planner.
> Utility methods for the data structure converters might not be implemented 
> yet.
> Not all changelog modes might be supported initially. This depends on 
> FLINK-16887.





[jira] [Created] (FLINK-17539) Migrate the configuration options which do not follow the xyz.max/min pattern.

2020-05-05 Thread Yangze Guo (Jira)
Yangze Guo created FLINK-17539:
--

 Summary: Migrate the configuration options which do not follow the 
xyz.max/min pattern.
 Key: FLINK-17539
 URL: https://issues.apache.org/jira/browse/FLINK-17539
 Project: Flink
  Issue Type: Sub-task
Reporter: Yangze Guo
 Fix For: 2.0.0


Config options that need to be changed:
- restart-strategy.failure-rate.max-failures-per-interval
- yarn.maximum-failed-containers
- state.backend.rocksdb.compaction.level.max-size-level-base
- cluster.registration.max-timeout
- high-availability.zookeeper.client.max-retry-attempts
- rest.client.max-content-length
- rest.retry.max-attempts
- rest.server.max-content-length
- jobstore.max-capacity
- taskmanager.registration.max-backoff
- compiler.delimited-informat.max-line-samples
- compiler.delimited-informat.min-line-samples
- compiler.delimited-informat.max-sample-len
- taskmanager.runtime.max-fan
- pipeline.max-parallelism
- execution.checkpointing.max-concurrent-checkpoint
- execution.checkpointing.min-pause
- akka.client-socket-worker-pool.pool-size-max
- akka.client-socket-worker-pool.pool-size-min
- akka.fork-join-executor.parallelism-max
- akka.fork-join-executor.parallelism-min
- akka.server-socket-worker-pool.pool-size-max
- akka.server-socket-worker-pool.pool-size-min
- containerized.heap-cutoff-min





[GitHub] [flink] flinkbot edited a comment on pull request #11985: [FLINK-16989][table] Support ScanTableSource in blink planner

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11985:
URL: https://github.com/apache/flink/pull/11985#issuecomment-623545781


   
   ## CI report:
   
   * 6dd7e458809c5c43ce9e51f4381af0b84440526d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=660)
 
   * 5cebd993df368d4065154858dae34ea2d6e41727 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=665)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #11854: [FLINK-17407] Introduce external resource framework

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11854:
URL: https://github.com/apache/flink/pull/11854#issuecomment-617586491


   
   ## CI report:
   
   * bddb0e274da11bbe99d15c6e0bb55e8d8c0e658a UNKNOWN
   * dc7a9c5c7d1fac82518815b9277809dfb82ddaac UNKNOWN
   * 2238559b0e2245e77204e7c7d0ef34c7a97e3766 UNKNOWN
   * 4d95d7fd1c806c67f751fc1604ead15cf02ff13a Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/164117123) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=661)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] dianfu commented on a change in pull request #11960: [FLINK-12717][python] Support running PyFlink on Windows

2020-05-05 Thread GitBox


dianfu commented on a change in pull request #11960:
URL: https://github.com/apache/flink/pull/11960#discussion_r420516301



##
File path: 
flink-python/src/main/java/org/apache/flink/streaming/api/operators/python/AbstractPythonFunctionOperator.java
##
@@ -159,7 +159,14 @@ public void open() throws Exception {
@Override
public void close() throws Exception {
try {
-   invokeFinishBundle();
+   try {

Review comment:
   What's the purpose of this change?

##
File path: 
flink-python/src/main/java/org/apache/flink/python/util/ResourceUtil.java
##
@@ -29,10 +29,17 @@
  */
 public class ResourceUtil {
 
-   public static final String PYFLINK_UDF_RUNNER = "pyflink-udf-runner.sh";
+   public static final String PYFLINK_UDF_RUNNER_SH = 
"pyflink-udf-runner.sh";
+   public static final String PYFLINK_UDF_RUNNER_BAT = 
"pyflink-udf-runner.bat";
 
public static File extractUdfRunner(String tmpdir) throws IOException, 
InterruptedException {
-   File file = new File(tmpdir, PYFLINK_UDF_RUNNER);
+   File file;
+   // This program can not depend any other dependencies, so we 
check the operating system without any utils.

Review comment:
   Why can't this program depend on other dependencies?
   Couldn't `OperatingSystem.isWindows()` also be used?

##
File path: 
flink-python/src/main/java/org/apache/flink/python/env/ProcessPythonEnvironmentManager.java
##
@@ -127,20 +130,36 @@ public void open() throws Exception {
}
 
@Override
-   public void close() {
-   FileUtils.deleteDirectoryQuietly(new File(baseDirectory));
-   if (shutdownHook != null) {
-   ShutdownHookUtil.removeShutdownHook(
-   shutdownHook, 
ProcessPythonEnvironmentManager.class.getSimpleName(), LOG);
-   shutdownHook = null;
+   public void close() throws Exception {
+   try {
+   int i = 0;
+   while (i < CHECK_TIMEOUT / CHECK_INTERVAL) {
+   try {
+   i++;
+   FileUtils.deleteDirectory(new 
File(baseDirectory));
+   } catch (Throwable t) {
+   if (i == CHECK_TIMEOUT / 
CHECK_INTERVAL) {
+   LOG.error("Clean the temporary 
directory of Python UDF worker failed.", t);
+   break;
+   }
+   }
+   Thread.sleep(CHECK_INTERVAL);
+   }
+   } finally {
+   if (shutdownHook != null) {
+   ShutdownHookUtil.removeShutdownHook(
+   shutdownHook, 
ProcessPythonEnvironmentManager.class.getSimpleName(), LOG);
+   shutdownHook = null;
+   }
+   LOG.info("Python environment manager is closing. Now 
print the content of boot log:\n" + getBootLog());

Review comment:
   Why do we need to print the content of the boot log?

[GitHub] [flink] flinkbot commented on pull request #12004: [FLINK-17434][core][hive] Hive partitioned source support streaming read

2020-05-05 Thread GitBox


flinkbot commented on pull request #12004:
URL: https://github.com/apache/flink/pull/12004#issuecomment-624441821


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit dfa5dc1b96f0e1a13b9c31b4fab97cf58d580299 (Wed May 06 
04:58:13 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Updated] (FLINK-17434) Hive partitioned source support streaming read

2020-05-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17434:
---
Labels: pull-request-available  (was: )

> Hive partitioned source support streaming read
> --
>
> Key: FLINK-17434
> URL: https://issues.apache.org/jira/browse/FLINK-17434
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>






[GitHub] [flink] JingsongLi opened a new pull request #12004: [FLINK-17434][core][hive] Hive partitioned source support streaming read

2020-05-05 Thread GitBox


JingsongLi opened a new pull request #12004:
URL: https://github.com/apache/flink/pull/12004


   
   ## What is the purpose of the change
   
   Implement a Hive streaming source that monitors partitions of the Hive
metastore and reads them in a streaming fashion.
   
   ## Brief change log
   
   - Refactor ContinuousFileReaderOperator to read generic splits so that the
operator can be reused.
   - HiveTableInputFormat implements CheckpointableInputFormat for streaming 
reading
   - Support streaming read for hive partitioned source
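
As a hypothetical illustration of the partition-monitoring idea (made-up names, not this PR's actual code): each poll of the metastore yields the current partition list, and only partitions not seen before are handed to the reader.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch, not the PR's actual code: remembers which Hive partitions
// have been seen and returns only newly discovered ones on each metastore poll.
class PartitionMonitor {
    private final Set<String> seen = new HashSet<>();

    // returns the partitions from this listing that were not seen before
    List<String> discoverNew(List<String> currentPartitions) {
        List<String> fresh = new ArrayList<>();
        for (String p : currentPartitions) {
            if (seen.add(p)) { // add() returns false if already present
                fresh.add(p);
            }
        }
        return fresh;
    }
}
```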
   
   ## Verifying this change
   
   Manually testing.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? JavaDocs
   







[GitHub] [flink] flinkbot edited a comment on pull request #11985: [FLINK-16989][table] Support ScanTableSource in blink planner

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11985:
URL: https://github.com/apache/flink/pull/11985#issuecomment-623545781


   
   ## CI report:
   
   * 6dd7e458809c5c43ce9e51f4381af0b84440526d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=660)
 
   * 5cebd993df368d4065154858dae34ea2d6e41727 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Updated] (FLINK-17531) Add a new checkpoint Gauge metric: elapsedTimeSinceLastCompletedCheckpoint

2020-05-05 Thread Steven Zhen Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Zhen Wu updated FLINK-17531:
---
Description: 
I'd like to discuss the value of a new checkpoint Gauge metric: 
`elapsedTimeSinceLastCompletedCheckpoint`. The main motivation is alerting. I 
know the reasons below are somewhat specific to our setup, hence I want to 
explore the interest of the community.

 

*What do we want to achieve?*

We want to alert if no successful checkpoint happened for a specific period. 
With this new metric, we can set up a simple alerting rule like `alert if 
elapsedTimeSinceLastCompletedCheckpoint > N minutes`. It is a good alerting 
pattern of `time since last success`. We found 
`elapsedTimeSinceLastCompletedCheckpoint` very intuitive to set up alert 
against.

 

*What about existing checkpoint metrics?*

`numberOfCompletedCheckpoints`. We can set up an alert like `alert if 
numberOfCompletedCheckpoints = 0 for N minutes`. However, it is an anti-pattern 
for our alerting system, as it looks for the absence of a good signal (vs. an 
explicit bad signal). Such an anti-pattern is more likely to raise false alarms 
when there is an occasional metric drop or an alerting-system processing issue.

 

`numberOfFailedCheckpoints`. That is an explicit failure signal, which is good. 
We can set up an alert like `alert if numberOfFailedCheckpoints > 0 in X out of 
Y minutes`. We have some high-parallelism large-state jobs. Their normal 
checkpoint duration is <1-2 minutes. However, when recovering from an outage 
with a large backlog, subtasks from one or a few containers sometimes 
experienced very high back pressure. It sometimes took the checkpoint barrier 
more than an hour to travel through the DAG to those heavily back-pressured 
subtasks. The back pressure is likely caused by the multi-tenant environment 
and performance variation among containers. Instead of letting the checkpoint 
time out in this case, we decided to increase the checkpoint timeout to a very 
long value (like 2 hours). With that, we essentially lost the explicit "bad" 
signal of a failed/timed-out checkpoint.

 

In theory, one could argue that we can set the checkpoint timeout to infinity. 
It is always better to have a long but completed checkpoint than a timed-out 
checkpoint, as a timed-out checkpoint basically gives up its position in the 
queue and a new checkpoint resets the position back to the end of the queue. 
Note that we are using at-least-once checkpoint semantics, so there is no 
barrier alignment concern. FLIP-76 (unaligned checkpoints) can help checkpoints 
deal with back pressure better, but it is not ready yet and also has its 
limitations. That is a separate discussion.
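
The proposed gauge itself is simple to sketch; the following standalone class is a hypothetical illustration (not Flink's actual CheckpointCoordinator code), with an injectable clock for testability:

```java
import java.util.function.LongSupplier;

// Hypothetical sketch (not Flink code): reports milliseconds elapsed since the
// last completed checkpoint, which an alert rule can compare against N minutes.
class ElapsedSinceLastCompletedCheckpointGauge {

    private final LongSupplier clock;       // injectable time source (epoch millis)
    private volatile long lastCompletedTs;  // timestamp of the last completed checkpoint

    ElapsedSinceLastCompletedCheckpointGauge(LongSupplier clock) {
        this.clock = clock;
        this.lastCompletedTs = clock.getAsLong(); // treat job start as the baseline
    }

    void onCheckpointCompleted() {
        lastCompletedTs = clock.getAsLong();
    }

    long getValue() {
        return clock.getAsLong() - lastCompletedTs;
    }
}
```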

  was:
like to discuss the value of a new checkpoint Gauge metric: 
`elapsedTimeSinceLastCompletedCheckpoint`. Main motivation is for alerting. I 
know reasons below are somewhat related to our setup. Hence want to explore the 
interest of the community.

 

*What do we want to achieve?*

We want to alert if no successful checkpoint happened for a specific period. 
With this new metric, we can set up a simple alerting rule like `alert if 
elapsedTimeSinceLastCompletedCheckpoint > N minutes`. It is a good alerting 
pattern of `time since last success`. We found 
`elapsedTimeSinceLastCompletedCheckpoint` very intuitive to set up alert 
against.

 

*What about existing checkpoint metrics?*

`numberOfCompletedCheckpoints`. We can set up an alert like `alert if 
numberOfCompletedCheckpoints = 0 for N minutes`. However, it is an anti-pattern 
for our alerting system, as it is looking for lack of good signal (vs explicit 
bad signal). Such an anti-pattern is easier to suffer false alarm problem when 
there is occasional metric drop or alerting system processing issue.

 

`numberOfFailedCheckpoints`. That is an explicit failure signal, which is good. 
We can set up alert like `alert if numberOfFailedCheckpoints > 0 in X out Y 
minutes`. We have some high-parallelism large-state jobs. Their normal 
checkpoint duration is <1-2 minutes. However, when recovering from an outage 
with large backlog, sometimes subtasks from one or a few containers experienced 
super high back pressure. It took checkpoint barrier sometimes more than an 
hour to travel through the DAG to those heavy back pressured subtasks. Causes 
of the back pressure are likely due to multi-tenancy environment and 
performance variation among containers. Instead of letting checkpoint to time 
out in this case, we decided to increase checkpoint timeout value to crazy long 
value (like 2 hours). With that, we kind of missed the explicit "bad" signal of 
failed/timed out checkpoint.

 

In theory, one could argue that we can set checkpoint timeout to infinity. It 
is always better to have a long but completed checkpoint than a timed out 
checkpoint, as timed out checkpoint basically give up its positions in the 
queue and new checkpoint just reset 

[jira] [Updated] (FLINK-17531) Add a new checkpoint Gauge metric: elapsedTimeSinceLastCompletedCheckpoint

2020-05-05 Thread Steven Zhen Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Zhen Wu updated FLINK-17531:
---
Description: 
like to discuss the value of a new checkpoint Gauge metric: 
`elapsedTimeSinceLastCompletedCheckpoint`. Main motivation is for alerting. I 
know reasons below are somewhat related to our setup. Hence want to explore the 
interest of the community.

 

*What do we want to achieve?*

We want to alert if no successful checkpoint happened for a specific period. 
With this new metric, we can set up a simple alerting rule like `alert if 
elapsedTimeSinceLastCompletedCheckpoint > N minutes`. It is a good alerting 
pattern of `time since last success`. We found 
`elapsedTimeSinceLastCompletedCheckpoint` very intuitive to set up alert 
against.

 

*What about existing checkpoint metrics?*

`numberOfCompletedCheckpoints`. We can set up an alert like `alert if 
numberOfCompletedCheckpoints = 0 for N minutes`. However, it is an anti-pattern 
for our alerting system, as it is looking for lack of good signal (vs explicit 
bad signal). Such an anti-pattern is easier to suffer false alarm problem when 
there is occasional metric drop or alerting system processing issue.

 

`numberOfFailedCheckpoints`. That is an explicit failure signal, which is good. 
We can set up alert like `alert if numberOfFailedCheckpoints > 0 in X out Y 
minutes`. We have some high-parallelism large-state jobs. Their normal 
checkpoint duration is <1-2 minutes. However, when recovering from an outage 
with large backlog, sometimes subtasks from one or a few containers experienced 
super high back pressure. It took checkpoint barrier sometimes more than an 
hour to travel through the DAG to those heavy back pressured subtasks. Causes 
of the back pressure are likely due to multi-tenancy environment and 
performance variation among containers. Instead of letting checkpoint to time 
out in this case, we decided to increase checkpoint timeout value to crazy long 
value (like 2 hours). With that, we kind of missed the explicit "bad" signal of 
failed/timed out checkpoint.

 

In theory, one could argue that we can set checkpoint timeout to infinity. It 
is always better to have a long but completed checkpoint than a timed out 
checkpoint, as timed out checkpoint basically give up its positions in the 
queue and new checkpoint just reset the positions back to the end of the queue 
. Note that we are using at least checkpoint semantics. So there is no barrier 
alignment concern. FLIP-76 (unaligned checkpoints) can help checkpoint dealing 
with back pressure better. It is not ready now and also has its limitations. 

  was:
like to discuss the value of a new checkpoint Gauge metric: 
`elapsedSecondsSinceLastCompletedCheckpoint`. Main motivation is for alerting. 
I know reasons below are somewhat related to our setup. Hence want to explore 
the interest of the community.

 

*What do we want to achieve?*

We want to alert if no successful checkpoint happened for a specific period. 
With this new metric, we can set up a simple alerting rule like `alert if 
elapsedSecondsSinceLastCompletedCheckpoint > N minutes`. It is a good alerting 
pattern of `time since last success`. We found 
`elapsedSecondsSinceLastCompletedCheckpoint` very intuitive to set up alert 
against.

 

*What about existing checkpoint metrics?*

`numberOfCompletedCheckpoints`. We can set up an alert like `alert if 
numberOfCompletedCheckpoints = 0 for N minutes`. However, it is an 
anti-pattern for our alerting system, as it looks for the lack of a good 
signal (vs. an explicit bad signal). Such an anti-pattern is more susceptible 
to false alarms when there is an occasional metric drop or an alerting-system 
processing issue.

 

`numberOfFailedCheckpoints`. That is an explicit failure signal, which is 
good. We can set up an alert like `alert if numberOfFailedCheckpoints > 0 in X 
out of Y minutes`. We have some high-parallelism, large-state jobs. Their 
normal checkpoint duration is <1-2 minutes. However, when recovering from an 
outage with a large backlog, subtasks from one or a few containers sometimes 
experienced super high back pressure. It sometimes took the checkpoint barrier 
more than an hour to travel through the DAG to those heavily back-pressured 
subtasks. The back pressure is likely due to the multi-tenancy environment and 
performance variation among containers. Instead of letting the checkpoint time 
out in this case, we decided to increase the checkpoint timeout to a very long 
value (like 2 hours). With that, we effectively lost the explicit "bad" signal 
of a failed/timed-out checkpoint.

 

In theory, one could argue that we can set checkpoint timeout to infinity. It 
is always better to have a long but completed checkpoint than a timed out 
checkpoint, as timed out checkpoint basically give up its positions in the 
queue and new checkpoint just reset the positions back to 

[GitHub] [flink] zhengcanbin commented on pull request #11996: [hotfix][runtime] Fix code style in ZooKeeperJobGraphStore

2020-05-05 Thread GitBox


zhengcanbin commented on pull request #11996:
URL: https://github.com/apache/flink/pull/11996#issuecomment-624435920


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17537) Refactor flink-jdbc connector structure

2020-05-05 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-17537:
---
Description: 
This issue proposes refactoring the flink-jdbc connector structure. As 
discussed on the mailing list[1], the details are:
  
 1) Use `Jdbc` instead of `JDBC` in new public API and interface names. The 
DataStream API `JdbcSink` introduced in this version already follows this 
standard. 
  
 2) Move all interfaces and classes from `org.apache.flink.java.io.jdbc` (old 
package) to `org.apache.flink.connector.jdbc` (new package) to follow the base 
connector path in FLIP-27.
  
 3) Keep the legacy JDBCOutputFormat, JDBCInputFormat and 
ParameterValuesProvider in the old package for compatibility.
  
 4) Rename `flink-jdbc` to `flink-connector-jdbc`. 
  
 
[http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Refactor-flink-jdbc-connector-structure-td40984.html]

  was:
This issue is ready to refactor the flink-jdbc connector structure.  As the 
discussion in mail list[1], the details are:
 
1) Use `Jdbc` instead of `JDBC` in the new public API and interface name. The 
Datastream API `JdbcSink` which imported in this version has followed this 
standard. 
 
2) Move all interface and classes from `org.apache.flink.java.io.jdbc`(old 
package) to `org.apache.flink.connector.jdbc`(new package) to follow the base 
connector path in FLIP-27.
 
3) Keep ancient JDBCOutputFormat, JDBCInputFormat and ParameterValuesProvider
will keep in old package
 
 
4) Rename `flink-jdbc` to `flink-connector-jdbc`. 
 
[http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Refactor-flink-jdbc-connector-structure-td40984.html]


> Refactor flink-jdbc connector structure
> ---
>
> Key: FLINK-17537
> URL: https://issues.apache.org/jira/browse/FLINK-17537
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Reporter: Leonard Xu
>Priority: Major
> Fix For: 1.11.0
>
>
> This issue is ready to refactor the flink-jdbc connector structure.  As the 
> discussion in mail list[1], the details are:
>   
>  1) Use `Jdbc` instead of `JDBC` in the new public API and interface name. 
> The Datastream API `JdbcSink` which imported in this version has followed 
> this standard. 
>   
>  2) Move all interface and classes from `org.apache.flink.java.io.jdbc`(old 
> package) to `org.apache.flink.connector.jdbc`(new package) to follow the base 
> connector path in FLIP-27.
>   
>  3) Keep ancient JDBCOutputFormat, JDBCInputFormat and ParameterValuesProvider
>  will keep in old package
>   
>  4) Rename `flink-jdbc` to `flink-connector-jdbc`. 
>   
>  
> [http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Refactor-flink-jdbc-connector-structure-td40984.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17538) Refactor flink-hbase connector structure

2020-05-05 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-17538:
---
Description: 
This issue proposes refactoring the flink-hbase connector structure. The 
refactoring follows the flink-jdbc module[1]; the initial proposal is:
 
1) Move all interfaces and classes from `org.apache.flink.addons.hbase` (old 
package) to `org.apache.flink.connector.hbase` (new package) to follow the 
base connector path in FLIP-27.
 
2) Keep the legacy TableInputFormat for compatibility. As for the remaining 
classes:

   (1) Move HBaseTableSource, HBaseUpsertTableSink and their factory from the 
old package to the new package, because TableEnvironment#registerTableSource 
and TableEnvironment#registerTableSink will be removed in 1.11.

   (2) Other classes are internal and can move to the new package.
 
3) Rename `flink-hbase` to `flink-connector-hbase`. 
 
[http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Refactor-flink-jdbc-connector-structure-td40984.html]

  
was:[http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Refactor-flink-jdbc-connector-structure-td40984.html]


> Refactor flink-hbase connector structure
> 
>
> Key: FLINK-17538
> URL: https://issues.apache.org/jira/browse/FLINK-17538
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / HBase
>Reporter: Leonard Xu
>Priority: Major
> Fix For: 1.11.0
>
>
> This issue is ready to refactor the flink-hbase connector structure.  The 
> refactor is referenced from flink-jdbc module[1], the initial propose  will 
> like:
>  
> 1) Move all interface and classes from `org.apache.flink.addons.hbase`(old 
> package) to `org.apache.flink.connector.hbase`(new package) to follow the 
> base connector path in FLIP-27.
>  
> 2) Keep ancient TableInputFormat for compatibility, as for the rest classes :
>    (1) move HbaseTableSource, HBaseUpsertTableSink and factory from old 
> package to new package because TableEnvironment#registerTableSource、 
> TableEnvironment#registerTableSink  will be removed in 1.11
>   (2)other classes are internal used and can move to new package
>  
> 3) Rename `flink-hbase` to `flink-connector-hbase`. 
>  
> [http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Refactor-flink-jdbc-connector-structure-td40984.html]





[GitHub] [flink-web] wuchong commented on pull request #267: [FLINK-13682][docs-zh] Translate "Code Style - Scala Guide" page into Chinese

2020-05-05 Thread GitBox


wuchong commented on pull request #267:
URL: https://github.com/apache/flink-web/pull/267#issuecomment-624433532


   Thanks @klion26 for reviewing this. I can help to merge this once you are 
fine with the changes.







[GitHub] [flink] flinkbot edited a comment on pull request #11914: [FLINK-17385][jdbc][postgres] Handled problem of numeric with 0 precision

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11914:
URL: https://github.com/apache/flink/pull/11914#issuecomment-619541487


   
   ## CI report:
   
   * 5246f29f20f57b6805dcac293f9024344edf5160 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/162067327) Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=268)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #11985: [FLINK-16989][table] Support ScanTableSource in blink planner

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11985:
URL: https://github.com/apache/flink/pull/11985#issuecomment-623545781


   
   ## CI report:
   
   * 6dd7e458809c5c43ce9e51f4381af0b84440526d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=660)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #11666: [FLINK-17038][API/DataStream] Decouple resolving Type from creating TypeInformation process

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11666:
URL: https://github.com/apache/flink/pull/11666#issuecomment-610769013


   
   ## CI report:
   
   * d0eaf4322c0a339ae0fe1d152c776bd860d79d52 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/164119025) Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=664)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Updated] (FLINK-17537) Refactor flink-jdbc connector structure

2020-05-05 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-17537:
---
Description: 
This issue proposes refactoring the flink-jdbc connector structure. As 
discussed on the mailing list[1], the details are:
 
1) Use `Jdbc` instead of `JDBC` in new public API and interface names. The 
DataStream API `JdbcSink` introduced in this version already follows this 
standard. 
 
2) Move all interfaces and classes from `org.apache.flink.java.io.jdbc` (old 
package) to `org.apache.flink.connector.jdbc` (new package) to follow the base 
connector path in FLIP-27.
 
3) Keep the legacy JDBCOutputFormat, JDBCInputFormat and 
ParameterValuesProvider in the old package for compatibility.
 
4) Rename `flink-jdbc` to `flink-connector-jdbc`. 
 
[http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Refactor-flink-jdbc-connector-structure-td40984.html]

  
was:[http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Refactor-flink-jdbc-connector-structure-td40984.html]


> Refactor flink-jdbc connector structure
> ---
>
> Key: FLINK-17537
> URL: https://issues.apache.org/jira/browse/FLINK-17537
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Reporter: Leonard Xu
>Priority: Major
> Fix For: 1.11.0
>
>
> This issue is ready to refactor the flink-jdbc connector structure.  As the 
> discussion in mail list[1], the details are:
>  
> 1) Use `Jdbc` instead of `JDBC` in the new public API and interface name. The 
> Datastream API `JdbcSink` which imported in this version has followed this 
> standard. 
>  
> 2) Move all interface and classes from `org.apache.flink.java.io.jdbc`(old 
> package) to `org.apache.flink.connector.jdbc`(new package) to follow the base 
> connector path in FLIP-27.
>  
> 3) Keep ancient JDBCOutputFormat, JDBCInputFormat and ParameterValuesProvider
> will keep in old package
>  
>  
> 4) Rename `flink-jdbc` to `flink-connector-jdbc`. 
>  
> [http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Refactor-flink-jdbc-connector-structure-td40984.html]





[jira] [Assigned] (FLINK-17459) JDBCAppendTableSink not support flush by flushIntervalMills

2020-05-05 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-17459:
---

Assignee: Jark Wu

> JDBCAppendTableSink not  support  flush  by flushIntervalMills
> --
>
> Key: FLINK-17459
> URL: https://issues.apache.org/jira/browse/FLINK-17459
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: ranqiqiang
>Assignee: Jark Wu
>Priority: Major
>
> {{JDBCAppendTableSink just support append by 
> "JDBCAppendTableSinkBuilder#batchSize",}}{{not support like 
> "JDBCUpsertTableSink#flushIntervalMills"}}
>  
> {{If batchSize=5000 ,  my data rows=5000*N+1 ,then last one record could not 
> be append !!}}





[jira] [Commented] (FLINK-17459) JDBCAppendTableSink not support flush by flushIntervalMills

2020-05-05 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100442#comment-17100442
 ] 

Jark Wu commented on FLINK-17459:
-

Hi [~michael ran], here is an example:


{code:java}
tableEnv.createTemporaryView("test", streamSource);
JDBCUpsertTableSink sink = JDBCUpsertTableSink.builder()
    .setOptions(options)
    .setTableSchema(schema)
    .setFlushIntervalMills(3000)
    .build();
tableEnv.registerTableSink("jdbc_sink", sink);
tableEnv.sqlUpdate("insert into jdbc_sink select order_id, user_id, status from test");
tableEnv.execute("job");
{code}


> JDBCAppendTableSink not  support  flush  by flushIntervalMills
> --
>
> Key: FLINK-17459
> URL: https://issues.apache.org/jira/browse/FLINK-17459
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: ranqiqiang
>Priority: Major
>
> {{JDBCAppendTableSink just support append by 
> "JDBCAppendTableSinkBuilder#batchSize",}}{{not support like 
> "JDBCUpsertTableSink#flushIntervalMills"}}
>  
> {{If batchSize=5000 ,  my data rows=5000*N+1 ,then last one record could not 
> be append !!}}





[jira] [Assigned] (FLINK-17459) JDBCAppendTableSink not support flush by flushIntervalMills

2020-05-05 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-17459:
---

Assignee: (was: Jark Wu)

> JDBCAppendTableSink not  support  flush  by flushIntervalMills
> --
>
> Key: FLINK-17459
> URL: https://issues.apache.org/jira/browse/FLINK-17459
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: ranqiqiang
>Priority: Major
>
> {{JDBCAppendTableSink just support append by 
> "JDBCAppendTableSinkBuilder#batchSize",}}{{not support like 
> "JDBCUpsertTableSink#flushIntervalMills"}}
>  
> {{If batchSize=5000 ,  my data rows=5000*N+1 ,then last one record could not 
> be append !!}}





[GitHub] [flink] flinkbot edited a comment on pull request #11960: [FLINK-12717][python] Support running PyFlink on Windows

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11960:
URL: https://github.com/apache/flink/pull/11960#issuecomment-621791651


   
   ## CI report:
   
   * a283855e4c5042bec925a05e15727ab2db71bd1e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=656)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #11946: [FLINK-17460][orc][parquet] Create sql-jars for parquet and orc

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11946:
URL: https://github.com/apache/flink/pull/11946#issuecomment-621227437


   
   ## CI report:
   
   * 3330a210cce782d40effb148131061190fcbe216 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/162723935) Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=425)
 
   * 93467113a4a07df9db9885ddee9df234c5f7 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=662)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #11955: [FLINK-17255][python] Add HBase connector descriptor support in PyFlink.

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11955:
URL: https://github.com/apache/flink/pull/11955#issuecomment-621638743


   
   ## CI report:
   
   * a8148a8db6a234ae4f9c51f8e9bc81fee80affe2 UNKNOWN
   * ea9f82a3882bbe023e1e8d3cc534b39e4e8cbd26 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=657)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #11914: [FLINK-17385][jdbc][postgres] Handled problem of numeric with 0 precision

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11914:
URL: https://github.com/apache/flink/pull/11914#issuecomment-619541487


   
   ## CI report:
   
   * 5246f29f20f57b6805dcac293f9024344edf5160 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/162067327) Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=268)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #11854: [FLINK-17407] Introduce external resource framework

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11854:
URL: https://github.com/apache/flink/pull/11854#issuecomment-617586491


   
   ## CI report:
   
   * bddb0e274da11bbe99d15c6e0bb55e8d8c0e658a UNKNOWN
   * dc7a9c5c7d1fac82518815b9277809dfb82ddaac UNKNOWN
   * 2238559b0e2245e77204e7c7d0ef34c7a97e3766 UNKNOWN
   * b4311ee10a3e6df9a129c2b971231e2312b63c37 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/163725498) Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=563)
 
   * 4d95d7fd1c806c67f751fc1604ead15cf02ff13a Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/164117123) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=661)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] KarmaGYZ commented on a change in pull request #11920: [FLINK-17408] Introduce GPUDriver and discovery script

2020-05-05 Thread GitBox


KarmaGYZ commented on a change in pull request #11920:
URL: https://github.com/apache/flink/pull/11920#discussion_r420532489



##
File path: 
flink-external-resource/flink-external-resource-gpu/src/main/java/org/apache/flink/externalresource/gpu/GPUDriver.java
##
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.externalresource.gpu;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.common.externalresource.ExternalResourceDriver;
+import org.apache.flink.configuration.ConfigConstants;
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.ExternalResourceOptions;
+import org.apache.flink.util.FlinkRuntimeException;
+import org.apache.flink.util.StringUtils;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.InputStreamReader;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.Set;
+
+import static org.apache.flink.configuration.ConfigOptions.key;
+
+/**
+ * Driver that takes the responsibility to discover GPU resources and provide the GPU resource information.
+ * It gets the GPU information by executing the user-defined discovery script.
+ */
+public class GPUDriver implements ExternalResourceDriver {
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(GPUDriver.class);
+
+   private static final String RESOURCE_NAME = "gpu";
+
+   private static final String DISCOVERY_SCRIPT_PATH_SUFFIX = 
"param.discovery-script.path";
+
+   private static final String DISCOVERY_SCRIPT_ARGS_SUFFIX = 
"param.discovery-script.args";
+
+   @VisibleForTesting
+   static final ConfigOption<String> DISCOVERY_SCRIPT_PATH =
+   
key(ExternalResourceOptions.keyWithResourceNameAndSuffix(RESOURCE_NAME, 
DISCOVERY_SCRIPT_PATH_SUFFIX))
+   .stringType()
+   .noDefaultValue();
+
+   @VisibleForTesting
+   static final ConfigOption<String> DISCOVERY_SCRIPT_ARG =
+   
key(ExternalResourceOptions.keyWithResourceNameAndSuffix(RESOURCE_NAME, 
DISCOVERY_SCRIPT_ARGS_SUFFIX))
+   .stringType()
+   .noDefaultValue();
+
+   @VisibleForTesting
+   static final ConfigOption<Long> GPU_AMOUNT =
+   
key(ExternalResourceOptions.keyWithResourceNameAndSuffix(RESOURCE_NAME, 
ExternalResourceOptions.EXTERNAL_RESOURCE_AMOUNT_SUFFIX))
+   .longType()
+   .defaultValue(0L);
+
+   private final Set<GPUInformation> gpuResources;
+
+   public GPUDriver(Configuration config) throws Exception {
+   final String discoveryScriptPath = 
config.getString(DISCOVERY_SCRIPT_PATH);
+   if (StringUtils.isNullOrWhitespaceOnly(discoveryScriptPath)) {
+   throw new FlinkRuntimeException("Could not find config 
of the path of gpu discovery script.");
+   }
+
+   final File discoveryScript = new 
File(System.getenv().getOrDefault(ConfigConstants.ENV_FLINK_HOME_DIR, ".") +
+   "/" + discoveryScriptPath);
+   if (!discoveryScript.exists()) {
+   throw new FlinkRuntimeException("The gpu discovery 
script does not exist in path " + discoveryScript.getAbsolutePath());
+   }
+
+   final String args = config.getString(DISCOVERY_SCRIPT_ARG);
+   final long gpuAmount = config.getLong(GPU_AMOUNT);
+   gpuResources = new HashSet<>();
+
+   if (gpuAmount <= 0) {
+   LOG.warn("The amount of GPU should be positive.");
+   return;
+   }
+
+   String output = executeDiscoveryScript(discoveryScript, 
gpuAmount, args);
+   if (output != null && !output.isEmpty()) {
+   String[] indexes = output.split(",");
+   for (String index : indexes) {
+   gpuResources.add(new GPUInformation(index));
+   }
+   }
+   
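For illustration, here is a minimal sketch of a discovery script that a driver like the one above could execute. The only contract visible in the code is that the script prints the allocated GPU indexes as a comma-separated list (the driver splits its output on ","); the function name and the fixed index range are stand-ins for a real hardware query such as nvidia-smi.

```shell
#!/bin/sh
# Hypothetical GPU discovery script: print the indexes of the first N GPUs
# (N is the first argument) as a comma-separated list, the format the
# driver above splits on ",".
discover_gpus() {
    amount="$1"
    i=0
    out=""
    # A real script would query the hardware (e.g. via nvidia-smi) instead
    # of assuming indexes 0..N-1 are available.
    while [ "$i" -lt "$amount" ]; do
        if [ -z "$out" ]; then out="$i"; else out="$out,$i"; fi
        i=$((i + 1))
    done
    printf '%s\n' "$out"
}

discover_gpus "${1:-2}"   # with the default of 2, prints: 0,1
```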

[GitHub] [flink] flinkbot edited a comment on pull request #11666: [FLINK-17038][API/DataStream] Decouple resolving Type from creating TypeInformation process

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11666:
URL: https://github.com/apache/flink/pull/11666#issuecomment-610769013


   
   ## CI report:
   
   * 357aec51985c844acda57e6cb165ea115c9ea903 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/159252575) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7202)
 
   * d0eaf4322c0a339ae0fe1d152c776bd860d79d52 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Updated] (FLINK-17472) StreamExecutionEnvironment and ExecutionEnvironment in Yarn mode

2020-05-05 Thread RocMarshal (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

RocMarshal updated FLINK-17472:
---
Attachment: demo.jpeg

> StreamExecutionEnvironment and ExecutionEnvironment in Yarn mode
> 
>
> Key: FLINK-17472
> URL: https://issues.apache.org/jira/browse/FLINK-17472
> Project: Flink
>  Issue Type: New Feature
>  Components: Client / Job Submission, Deployment / YARN
>Affects Versions: 1.10.0
>Reporter: RocMarshal
>Priority: Major
> Attachments: demo.jpeg
>
>
> We expect a submission mode where the job is built directly in the
> Environment and then submitted in YARN mode. Just like
> RemoteStreamEnvironment, as long as you specify the parameters of the YARN
> cluster (host, port) or the YARN configuration directory and
> HADOOP_USER_NAME, you can submit the job using the topology built by the Env.
> This submission method should minimize the transmission of the resources
> YARN needs to start the Flink JobManager and TaskManagerRunner, so that
> Flink can deploy jobs on the YARN cluster as quickly as possible.
> A simple demo is shown in the external link [Simple outline of the
> Yarn-Env API (Per-Job
> Mode)|https://gitee.com/RocMarshal/resources4link/blob/master/README.md]. The
> parameter named 'env' contains all the operators of the job, like
> sources, maps, etc.
>  





[jira] [Created] (FLINK-17538) Refactor flink-hbase connector structure

2020-05-05 Thread Leonard Xu (Jira)
Leonard Xu created FLINK-17538:
--

 Summary: Refactor flink-hbase connector structure
 Key: FLINK-17538
 URL: https://issues.apache.org/jira/browse/FLINK-17538
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / HBase
Reporter: Leonard Xu
 Fix For: 1.11.0


[http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Refactor-flink-jdbc-connector-structure-td40984.html]





[jira] [Created] (FLINK-17537) Refactor flink-jdbc connector structure

2020-05-05 Thread Leonard Xu (Jira)
Leonard Xu created FLINK-17537:
--

 Summary: Refactor flink-jdbc connector structure
 Key: FLINK-17537
 URL: https://issues.apache.org/jira/browse/FLINK-17537
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / JDBC
Reporter: Leonard Xu
 Fix For: 1.11.0


[http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Refactor-flink-jdbc-connector-structure-td40984.html]





[jira] [Closed] (FLINK-17462) Support CSV serialization and deseriazation schema for RowData type

2020-05-05 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-17462.
---
Resolution: Fixed

Implemented in master (1.11.0): b1e436a109c6473b794f02ec0a853d1ae6df6c83

> Support CSV serialization and deseriazation schema for RowData type
> ---
>
> Key: FLINK-17462
> URL: https://issues.apache.org/jira/browse/FLINK-17462
> Project: Flink
>  Issue Type: Sub-task
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Add support {{CsvRowDataDeserializationSchema}} and 
> {{CsvRowDataSerializationSchema}} for the new data structure {{RowData}}.





[GitHub] [flink] wuchong commented on pull request #11962: [FLINK-17462][format][csv] Support CSV serialization and deseriazation schema for RowData type

2020-05-05 Thread GitBox


wuchong commented on pull request #11962:
URL: https://github.com/apache/flink/pull/11962#issuecomment-624425766


   The build failed because of 
`UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointMassivelyParallel`, 
which is tracked by FLINK-17315.
   
   It passed in my private build: 
https://dev.azure.com/imjark/Flink/_build/results?buildId=45=results
   
   Will merge this. 







[jira] [Commented] (FLINK-17459) JDBCAppendTableSink not support flush by flushIntervalMills

2020-05-05 Thread ranqiqiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100433#comment-17100433
 ] 

ranqiqiang commented on FLINK-17459:


[~jark] could you give an example of using JDBCUpsertTableSink to achieve appends?

> JDBCAppendTableSink not  support  flush  by flushIntervalMills
> --
>
> Key: FLINK-17459
> URL: https://issues.apache.org/jira/browse/FLINK-17459
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: ranqiqiang
>Priority: Major
>
> {{JDBCAppendTableSink}} only supports flushing by 
> {{JDBCAppendTableSinkBuilder#batchSize}}; it does not support an 
> interval-based flush like {{JDBCUpsertTableSink#flushIntervalMills}}.
>  
> If batchSize=5000 and my data has 5000*N+1 rows, then the last record can 
> never be appended!
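The combination of size-based and time-based flushing being asked for here can be sketched as follows. This is a self-contained illustration under assumed names (`IntervalFlushBuffer` is hypothetical, not Flink code); a real sink would execute a JDBC batch where this merely counts rows.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical buffering sink core: flushes when the batch is full OR when
// flushIntervalMills elapses, so a trailing partial batch (the 5000*N+1'th
// row in the example above) is not stranded until more input arrives.
class IntervalFlushBuffer {
    private final int batchSize;
    private final List<String> buffer = new ArrayList<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private int flushedRows = 0;

    IntervalFlushBuffer(int batchSize, long flushIntervalMills) {
        this.batchSize = batchSize;
        // Timer-driven flush, analogous to JDBCUpsertTableSink#flushIntervalMills.
        scheduler.scheduleWithFixedDelay(
                this::flush, flushIntervalMills, flushIntervalMills, TimeUnit.MILLISECONDS);
    }

    synchronized void add(String row) {
        buffer.add(row);
        if (buffer.size() >= batchSize) {
            flush();  // size-driven flush, analogous to batchSize
        }
    }

    synchronized void flush() {
        flushedRows += buffer.size();  // stand-in for executing the JDBC batch
        buffer.clear();
    }

    synchronized int getFlushedRows() {
        return flushedRows;
    }

    void close() {
        scheduler.shutdown();
        flush();  // final flush so the last partial batch is written
    }
}
```

With batchSize=2, adding three rows flushes two immediately and the third only on the timer or on close, which is exactly the behavior the reporter is missing.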





[GitHub] [flink] flinkbot edited a comment on pull request #11954: [FLINK-17420][table sql / api]Cannot alias Tuple and Row fields when converting DataStream to Table

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11954:
URL: https://github.com/apache/flink/pull/11954#issuecomment-621615631


   
   ## CI report:
   
   * c5e67bf6cfb59353b1109c060b82820920d30ff8 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/164108249) 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #11985: [FLINK-16989][table] Support ScanTableSource in blink planner

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11985:
URL: https://github.com/apache/flink/pull/11985#issuecomment-623545781


   
   ## CI report:
   
   * 420704211108d661a5d9959e571ce98460ae1897 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=624)
 
   * 6dd7e458809c5c43ce9e51f4381af0b84440526d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=660)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #11946: [FLINK-17460][orc][parquet] Create sql-jars for parquet and orc

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11946:
URL: https://github.com/apache/flink/pull/11946#issuecomment-621227437


   
   ## CI report:
   
   * 3330a210cce782d40effb148131061190fcbe216 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/162723935) Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=425)
 
   * 93467113a4a07df9db9885ddee9df234c5f7 UNKNOWN
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #11854: [FLINK-17407] Introduce external resource framework

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11854:
URL: https://github.com/apache/flink/pull/11854#issuecomment-617586491


   
   ## CI report:
   
   * bddb0e274da11bbe99d15c6e0bb55e8d8c0e658a UNKNOWN
   * dc7a9c5c7d1fac82518815b9277809dfb82ddaac UNKNOWN
   * 2238559b0e2245e77204e7c7d0ef34c7a97e3766 UNKNOWN
   * b4311ee10a3e6df9a129c2b971231e2312b63c37 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/163725498) Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=563)
 
   * 4d95d7fd1c806c67f751fc1604ead15cf02ff13a UNKNOWN
   
   







[jira] [Commented] (FLINK-17315) UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointMassivelyParallel failed in timeout

2020-05-05 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100432#comment-17100432
 ] 

Jark Wu commented on FLINK-17315:
-

I still hit this problem in my pull request CI yesterday: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=631=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=45cc9205-bdb7-5b54-63cd-89fdc0983323

> UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointMassivelyParallel 
> failed in timeout
> -
>
> Key: FLINK-17315
> URL: https://issues.apache.org/jira/browse/FLINK-17315
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Tests
>Affects Versions: 1.11.0
>Reporter: Zhijiang
>Assignee: Arvid Heise
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>
> Build: 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=45cc9205-bdb7-5b54-63cd-89fdc0983323]
> logs
> {code:java}
> 2020-04-21T20:25:23.1139147Z [ERROR] Errors: 
> 2020-04-21T20:25:23.1140908Z [ERROR]   
> UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointMassivelyParallel:80->execute:87
>  » TestTimedOut
> 2020-04-21T20:25:23.1141383Z [INFO] 
> 2020-04-21T20:25:23.1141675Z [ERROR] Tests run: 1525, Failures: 0, Errors: 1, 
> Skipped: 36
> {code}
>  
> I ran it on my local machine and it takes about 40 seconds to finish, so the 
> configured 90-second timeout is sometimes not enough in a heavily loaded 
> environment. Maybe we can remove the timeout in the tests, since Azure is 
> already configured to monitor timeouts.
>  





[GitHub] [flink] bowenli86 commented on pull request #11914: [FLINK-17385][jdbc][postgres] Handled problem of numeric with 0 precision

2020-05-05 Thread GitBox


bowenli86 commented on pull request #11914:
URL: https://github.com/apache/flink/pull/11914#issuecomment-624422968


   @flinkbot run travis







[jira] [Updated] (FLINK-13938) Use pre-uploaded libs to accelerate flink submission

2020-05-05 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated FLINK-13938:
--
Description: 
Currently, every time we start a Flink cluster, the Flink lib jars need to be 
uploaded to HDFS and registered as YARN local resources so that they can be 
downloaded to the JobManager and all TaskManager containers. I think we could 
make two optimizations.
 # Use pre-uploaded flink binary to avoid uploading of flink system jars
 # By default, the LocalResourceVisibility is APPLICATION, so they will be 
downloaded only once and shared for all taskmanager containers of a same 
application in the same node. However, different applications will have to 
download all jars every time, including the flink-dist.jar. We could use the 
yarn public cache to eliminate the unnecessary jars downloading and make 
launching container faster.
 
 

Following the discussion in the user ML. 
[https://lists.apache.org/list.html?u...@flink.apache.org:lte=1M:Flink%20Conf%20%22yarn.flink-dist-jar%22%20Question]

Take both FLINK-13938 and FLINK-14964 into account, this feature will be done 
in the following steps.
 * Enrich "\-yt/--yarnship" to support HDFS directory
 * Add a new config option to control whether to disable the flink-dist 
uploading(*Will be extended to support all files, including lib/plugin/user 
jars/dependencies/etc.*)
 * Enrich "\-yt/--yarnship" to specify local resource visibility. It is 
"APPLICATION" by default. It could be also configured to "PUBLIC", which means 
shared by all applications, or "PRIVATE" which means shared by a same user. 
(*Will be done later according to the feedback*)
  
 How to use this feature?
 1. First, upload the Flink binary and user jars to the HDFS directories
 2. Use "\-yt/–yarnship" to specify the pre-uploaded libs
 3. Disable the automatic uploading of flink-dist via 
{{yarn.submission.automatic-flink-dist-upload}}: false
  
A final submission command could be issued like the following.
{code:java}
./bin/flink run -m yarn-cluster -d \
-yt hdfs://myhdfs/flink/release/flink-1.11 \
-yD yarn.submission.automatic-flink-dist-upload=false \
examples/streaming/WindowJoin.jar
{code}

  was:
Currently, every time we start a flink cluster, flink lib jars need to be 
uploaded to hdfs and then register Yarn local resource so that it could be 
downloaded to jobmanager and all taskmanager container. I think we could have 
two optimizations.
 # Use pre-uploaded flink binary to avoid uploading of flink system jars
 # Use the yarn public cache to eliminate the unnecessary jars downloading and 
make launching container faster. The public cache could be shared by different 
applications.

 

By default, the LocalResourceVisibility is APPLICATION, so they will be 
downloaded only once and shared for all taskmanager containers of a same 
application in the same node. However, different applications will have to 
download all jars every time, including the flink-dist.jar. We could use the 
yarn public cache to eliminate the unnecessary jars downloading and make 
launching container faster.

 

 

Following the discussion in the user ML. 
[https://lists.apache.org/list.html?u...@flink.apache.org:lte=1M:Flink%20Conf%20%22yarn.flink-dist-jar%22%20Question]
 Take both FLINK-13938 and FLINK-14964 into account, this feature will be done 
in the following steps.
 * Enrich "-yt/--yarnship" to support HDFS directory
 * Add a new config option to control whether to disable the flink-dist 
uploading
 * Enrich "-yt/--yarnship" to specify local resource visibility. It is 
"APPLICATION" by default. It could be also configured to "PUBLIC", which means 
shared by all applications, or "PRIVATE" which means shared by a same user. 
(*Will be done later according to the feedback*)
  
 How to use this feature?
 1. First, upload the Flink binary and user jars to the HDFS directories
 2. Use "-yt/–yarnship" to specify the pre-uploaded libs
 3. Disable the automatic uploading of flink-dist via 
{{yarn.submission.automatic-flink-dist-upload}}: false
  
 A final submission command could be issued like following.
{code:java}
./bin/flink run -m yarn-cluster -d \
-yt hdfs://myhdfs/flink/release/flink-1.11 \
-yD yarn.submission.automatic-flink-dist-upload=false \
examples/streaming/WindowJoin.jar
{code}


> Use pre-uploaded libs to accelerate flink submission
> 
>
> Key: FLINK-13938
> URL: https://issues.apache.org/jira/browse/FLINK-13938
> Project: Flink
>  Issue Type: New Feature
>  Components: Client / Job Submission, Deployment / YARN
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, every time we start a flink cluster, flink lib jars need to be 
> uploaded to 

[GitHub] [flink] flinkbot edited a comment on pull request #12003: [FLINK-10934] Support application mode for kubernetes

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #12003:
URL: https://github.com/apache/flink/pull/12003#issuecomment-624416714


   
   ## CI report:
   
   * 068f558f4deeb9ddbb4cb0ea8013bbe099e912cd Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=658)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #11985: [FLINK-16989][table] Support ScanTableSource in blink planner

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11985:
URL: https://github.com/apache/flink/pull/11985#issuecomment-623545781


   
   ## CI report:
   
   * 420704211108d661a5d9959e571ce98460ae1897 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=624)
 
   * 6dd7e458809c5c43ce9e51f4381af0b84440526d UNKNOWN
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #11955: [FLINK-17255][python] Add HBase connector descriptor support in PyFlink.

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11955:
URL: https://github.com/apache/flink/pull/11955#issuecomment-621638743


   
   ## CI report:
   
   * 4908dcf28040c99424f7f175f40902473a11578f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=467)
 
   * a8148a8db6a234ae4f9c51f8e9bc81fee80affe2 UNKNOWN
   * ea9f82a3882bbe023e1e8d3cc534b39e4e8cbd26 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=657)
 
   
   







[jira] [Commented] (FLINK-17309) TPC-DS fail to run data generator

2020-05-05 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100426#comment-17100426
 ] 

Leonard Xu commented on FLINK-17309:


Hi all,

I prepared a PR that adds validation and retry logic for when the md5sum of the 
generator does not match. I triggered it many times over the past days and only 
found one failure, caused by a network issue as we suspected [1].

I also checked all AZP runs [2] in the past week (AZP numbers from 4.30.1 to 
5.6.1) manually and found no TPC-DS failure coming from the data generator.

I think it's time to confirm whether the PR works; I can polish the PR soon if 
we reach a consensus.

What do you think? [~rmetzger] [~dwysakowicz] [~lzljs3620320]

[1][https://github.com/apache/flink/pull/11867#issuecomment-618413808]

[2][https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=2&_a=summary]
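The validate-and-retry idea discussed here can be sketched in shell. This is a hedged sketch only: function names, variables, and the retry count are illustrative, not the actual e2e test-script code.

```shell
#!/bin/sh
# Hedged sketch of "validate md5sum of the downloaded generator and retry".

# check_md5 FILE EXPECTED: succeed when FILE's md5sum matches EXPECTED.
check_md5() {
  [ "$(md5sum "$1" | awk '{print $1}')" = "$2" ]
}

# download_with_retry DOWNLOAD_CMD FILE EXPECTED MAX_TRIES:
# re-run DOWNLOAD_CMD until the checksum matches or attempts run out.
download_with_retry() {
  cmd=$1; file=$2; expected=$3; max_tries=$4
  i=0
  while [ "$i" -lt "$max_tries" ]; do
    "$cmd" "$file"
    if check_md5 "$file" "$expected"; then
      return 0
    fi
    echo "md5 mismatch for $file, retrying..." >&2
    i=$((i + 1))
  done
  return 1
}
```

A transient network corruption then costs one extra download instead of failing the whole TPC-DS run.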
 

> TPC-DS fail to run data generator
> -
>
> Key: FLINK-17309
> URL: https://issues.apache.org/jira/browse/FLINK-17309
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: Dawid Wysakowicz
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> {code}
> [INFO] Download data generator success.
> [INFO] 15:53:41 Generating TPC-DS qualification data, this need several 
> minutes, please wait...
> ./dsdgen_linux: line 1: 500:: command not found
> [FAIL] Test script contains errors.
> {code}
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7849=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5





[GitHub] [flink] RocMarshal edited a comment on pull request #11979: [FLINK-17291][docs] Translate 'docs/training/event_driven.zh.md' to C…

2020-05-05 Thread GitBox


RocMarshal edited a comment on pull request #11979:
URL: https://github.com/apache/flink/pull/11979#issuecomment-624416728


   @klion26 
   Hi, @klion26 .
   I have completed the translation of this page and made corresponding 
improvements according to the suggestions of community members. If you have 
free time, would you please check it for me?
   
   Thank you for your attention.







[jira] [Commented] (FLINK-17536) Change the config option of max limitation to "slotmanager.number-of-slots.max"

2020-05-05 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100422#comment-17100422
 ] 

Yangze Guo commented on FLINK-17536:


[~GJL] Could you assign this to me?

> Change the config option of max limitation to 
> "slotmanager.number-of-slots.max"
> ---
>
> Key: FLINK-17536
> URL: https://issues.apache.org/jira/browse/FLINK-17536
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Reporter: Yangze Guo
>Priority: Major
> Fix For: 1.11.0
>
>
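For context, the renamed option would be set in flink-conf.yaml like this (an illustrative fragment; the value is made up, and only the key name comes from this ticket):

```yaml
# flink-conf.yaml -- caps the total number of slots the slot manager
# may allocate; key name as proposed in FLINK-17536, value illustrative.
slotmanager.number-of-slots.max: 100
```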






[jira] [Created] (FLINK-17536) Change the config option of max limitation to "slotmanager.number-of-slots.max"

2020-05-05 Thread Yangze Guo (Jira)
Yangze Guo created FLINK-17536:
--

 Summary: Change the config option of max limitation to 
"slotmanager.number-of-slots.max"
 Key: FLINK-17536
 URL: https://issues.apache.org/jira/browse/FLINK-17536
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Configuration
Reporter: Yangze Guo
 Fix For: 1.11.0








[GitHub] [flink] RocMarshal commented on pull request #11979: [FLINK-17291][docs] Translate 'docs/training/event_driven.zh.md' to C…

2020-05-05 Thread GitBox


RocMarshal commented on pull request #11979:
URL: https://github.com/apache/flink/pull/11979#issuecomment-624416728


   @klion26 
   Hi, @klion26 .
   I have completed the translation of this page and made corresponding 
improvements according to the suggestions of community members. If you have 
free time, would you please review it for me?
   
   Thank you for your attention.







[GitHub] [flink] flinkbot edited a comment on pull request #11960: [FLINK-12717][python] Support running PyFlink on Windows

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11960:
URL: https://github.com/apache/flink/pull/11960#issuecomment-621791651


   
   ## CI report:
   
   * 22afa2e085f3a320890575e4e9bc3802620b93ae Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=481)
 
   * a283855e4c5042bec925a05e15727ab2db71bd1e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=656)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #11955: [FLINK-17255][python] Add HBase connector descriptor support in PyFlink.

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11955:
URL: https://github.com/apache/flink/pull/11955#issuecomment-621638743


   
   ## CI report:
   
   * 4908dcf28040c99424f7f175f40902473a11578f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=467)
 
   * a8148a8db6a234ae4f9c51f8e9bc81fee80affe2 UNKNOWN
   * ea9f82a3882bbe023e1e8d3cc534b39e4e8cbd26 UNKNOWN
   
   







[GitHub] [flink] flinkbot commented on pull request #12003: [FLINK-10934] Support application mode for kubernetes

2020-05-05 Thread GitBox


flinkbot commented on pull request #12003:
URL: https://github.com/apache/flink/pull/12003#issuecomment-624416714


   
   ## CI report:
   
   * 068f558f4deeb9ddbb4cb0ea8013bbe099e912cd UNKNOWN
   
   







[jira] [Commented] (FLINK-17532) Update tests to use BatchTestBase#checkTableResult

2020-05-05 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100420#comment-17100420
 ] 

Jark Wu commented on FLINK-17532:
-

cc [~godfreyhe], [~TsReaper]

> Update tests to use BatchTestBase#checkTableResult
> --
>
> Key: FLINK-17532
> URL: https://issues.apache.org/jira/browse/FLINK-17532
> Project: Flink
>  Issue Type: Test
>  Components: Table SQL / Planner
>Reporter: Timo Walther
>Priority: Major
>  Labels: starter
>
> Roughly 196 tests fail if we change the `Row.toString`. In the legacy 
> planner, we will fix this quickly using some util. However, for the long-term 
> Blink planner we should update those test to use the test bases and compare 
> against instances instead of string. 
> Similar to:
> {code}
> checkResult(
>   "SELECT j, sum(k) FROM GenericTypedTable3 GROUP BY i, j",
>   Seq(
> row(row(1, 1), 2),
> row(row(1, 1), 2),
> row(row(10, 1), 3)
>   )
> )
> {code}
> Affected tests:
> {code}
> AggregateITCaseBase
> PartitionableSinkITCase
> CalcITCase
> JoinITCase
> SortITCase
> CorrelateITCase
> TableSinkITCase
> AggregationITCase
> GroupWindowITCase
> SetOperatorsITCase
> CalcITCase
> UnnestITCase
> AggregateRemoveITCase
> PruneAggregateCallITCase
> CalcITCase
> CorrelateITCase
> TableSinkITCase
> SetOperatorsITCase
> {code}





[jira] [Closed] (FLINK-17332) Fix restart policy not equals to Never for native task manager pods

2020-05-05 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen closed FLINK-17332.
-
Resolution: Fixed

master(1.11) via 04ab8d236254777e16bdabfceb217190e3b2cfde

> Fix restart policy not equals to Never for native task manager pods
> ---
>
> Key: FLINK-17332
> URL: https://issues.apache.org/jira/browse/FLINK-17332
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.10.0, 1.10.1
>Reporter: Canbin Zheng
>Assignee: Canbin Zheng
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Currently, we do not explicitly set the {{RestartPolicy}} for the task 
> manager pods in the native K8s setups, so it is {{Always}} by default. The 
> task manager pod itself should not restart a failed container; that decision 
> should always be made by the job manager.
> Therefore, this ticket proposes to set the {{RestartPolicy}} to {{Never}} for 
> the task manager pods.
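The proposed change corresponds to the following pod-spec fragment (illustrative YAML only; the native integration sets this field programmatically through the Kubernetes client, and names/images here are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: flink-taskmanager
spec:
  # Never: a failed container is not restarted in place by the kubelet;
  # the JobManager detects the loss and requests a replacement pod instead.
  restartPolicy: Never
  containers:
    - name: flink-taskmanager
      image: flink:1.10
```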





[jira] [Created] (FLINK-17535) Treat min/max as part of the hierarchy of config option

2020-05-05 Thread Yangze Guo (Jira)
Yangze Guo created FLINK-17535:
--

 Summary: Treat min/max as part of the hierarchy of config option
 Key: FLINK-17535
 URL: https://issues.apache.org/jira/browse/FLINK-17535
 Project: Flink
  Issue Type: Improvement
Reporter: Yangze Guo


As discussed in 
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Should-max-min-be-part-of-the-hierarchy-of-config-option-td40578.html.
 We decide to treat min/max as part of the hierarchy of config option. This 
ticket is an umbrella of all tasks related to it.





[GitHub] [flink-web] klion26 commented on a change in pull request #267: [FLINK-13682][docs-zh] Translate "Code Style - Scala Guide" page into Chinese

2020-05-05 Thread GitBox


klion26 commented on a change in pull request #267:
URL: https://github.com/apache/flink-web/pull/267#discussion_r420509939



##
File path: contributing/code-style-and-quality-scala.zh.md
##
@@ -8,68 +8,67 @@ title:  "Apache Flink Code Style and Quality Guide  — Scala"
 
 
 
-## Scala Language Features
+## Scala 语言特性
 
-### Where to use (and not use) Scala
+### 在哪儿使用(和不使用) Scala
 
-**We use Scala for Scala APIs or pure Scala Libraries.**
+**我们使用 Scala 的 API 或者纯 Scala 库。**

Review comment:
   This sentence should read as "对于 Scala 的 API 或者纯 Scala 的 libraries,我们会使用 Scala"; 
of course the wording still needs polishing to find a better phrasing.

##
File path: contributing/code-style-and-quality-scala.zh.md
##
@@ -8,68 +8,67 @@ title:  "Apache Flink Code Style and Quality Guide  — Scala"
 
 
 
-## Scala Language Features
+## Scala 语言特性
 
-### Where to use (and not use) Scala
+### 在哪儿使用(和不使用) Scala
 
-**We use Scala for Scala APIs or pure Scala Libraries.**
+**我们使用 Scala 的 API 或者纯 Scala 库。**
 
-**We do not use Scala in the core APIs and runtime components. We aim to 
remove existing Scala use (code and dependencies) from those components.**
+**在 core API 和 运行时的组件中,我们不使用 Scala。我们的目标是从这些组件中删除现有的 Scala 使用(代码和依赖项)。**
 
-⇒ This is not because we do not like Scala, it is a consequence of “the right 
tool for the right job” approach (see below).
+⇒ 这并不是因为我们不喜欢 Scala,而是考虑到“用正确的工具做正确的事”的结果(见下文)。
 
-For APIs, we develop the foundation in Java, and layer Scala on top.
+对于 API,我们使用 Java 开发基础内容,并在上层使用 Scala。
 
-* This has traditionally given the best interoperability for both Java and 
Scala
-* It does mean dedicated effort to keep the Scala API up to date
+* 这在传统上为 Java 和 Scala 提供了最佳的互通性
+* 这意味着要致力于保持 Scala API 的更新
 
-Why don’t we use Scala in the core APIs and runtime?
+为什么我们不在 core API 和运行时中使用 Scala ?
 
-* The past has shown that Scala evolves too quickly with tricky changes in 
functionality. Each Scala version upgrade was a rather big effort process for 
the Flink community.
-* Scala does not always interact nicely with Java classes, e.g. Scala’s 
visibility scopes work differently and often expose more to Java consumers than 
desired
-* Scala adds an additional layer of complexity to artifact/dependency 
management.
-* We may want to keep Scala dependent libraries like Akka in the runtime, 
but abstract them via an interface and load them in a separate classloader, to 
keep them shielded and avoid version conflicts.
-* Scala makes it very easy for knowledgeable Scala programmers to write code 
that is very hard to understand for programmers that are less knowledgeable in 
Scala. That is especially tricky for an open source project with a broad 
community of diverse experience levels. Working around this means restricting 
the Scala feature set by a lot, which defeats a good amount of the purpose of 
using Scala in the first place.
+* 过去的经验显示, Scala 在功能上的变化太快了。对于 Flink 社区来说,每次 Scala 版本升级都是一个需要付出相当大努力的过程。
+* Scala 并不总能很好地与 Java 的类交互,例如 Scala 的可见性范围的工作方式不同,而且常常向 Java 消费者公开的内容比预期的要多。
+* Scala artifact/dependency 的管理增加了一层额外的复杂性。
+* 我们可能希望在运行时保留像 Akka 这样依赖 Scala 的库,但是要通过接口抽象它们,并在单独的类加载器中加载它们,以保护它们并避免版本冲突。
+* Scala 让懂 Scala 的程序员很容易编写代码,而对于不太懂 Scala 
的程序员来说,这些代码很难理解。对于一个拥有不同经验水平的广大社区的开源项目来说,这尤其棘手。解决这个问题意味着大量限制 Scala 
特性集,这首先就违背了使用 Scala 的很多目的。
 
 
-### API Parity
+### API 同等
 
-Keep Java API and Scala API in sync in terms of functionality and code quality.
+保持 Java API 和 Scala API 在功能和代码质量方面的同步。
 
-The Scala API should cover all the features of the Java APIs as well.
+Scala API 也应该涵盖 Java API 的所有特性。
 
-Scala APIs should have a “completeness test”, like the following example from 
the DataStream API: 
[https://github.com/apache/flink/blob/master/flink-streaming-scala/src/test/scala/org/apache/flink/streaming/api/scala/StreamingScalaAPICompletenessTest.scala](https://github.com/apache/flink/blob/master/flink-streaming-scala/src/test/scala/org/apache/flink/streaming/api/scala/StreamingScalaAPICompletenessTest.scala)
+Scala API 应该有一个“完整性测试”,就如下面 DataStream API 的示例中的一样: 
[https://github.com/apache/flink/blob/master/flink-streaming-scala/src/test/scala/org/apache/flink/streaming/api/scala/StreamingScalaAPICompletenessTest.scala](https://github.com/apache/flink/blob/master/flink-streaming-scala/src/test/scala/org/apache/flink/streaming/api/scala/StreamingScalaAPICompletenessTest.scala)
 
 
-### Language Features
+### 语言特性
 
-* **Avoid Scala implicits.**
-* Scala’s implicits should only be used for user-facing API improvements 
such as the Table API expressions or type information extraction.
-* Don’t use them for internal “magic”.
-* **Add explicit types for class members.**
-* Don’t rely on implicit type inference for class fields and methods 
return types: 
+* **避免 Scala 隐式转换。**
+* Scala 的隐式转换应该只用于面向用户的 API 改进,例如 Table API 表达式或类型信息提取。
+* 不要把它们用于内部 “magic”。
+* **为类成员添加显式类型。**
+* 对于类字段和方法返回类型,不要依赖隐式类型推断:
  
-**Don’t:**
+**不要这样:**
 ```
 var expressions = new java.util.ArrayList[String]()
 ```
 

[jira] [Commented] (FLINK-16743) Introduce datagen, print, blackhole connectors

2020-05-05 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100415#comment-17100415
 ] 

Jingsong Lee commented on FLINK-16743:
--

[~phoenixjiangnan] Thank you for your attention, this should be implemented 
after FLIP-95.

> Introduce datagen, print, blackhole connectors
> --
>
> Key: FLINK-16743
> URL: https://issues.apache.org/jira/browse/FLINK-16743
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Discussion: 
> [http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Introduce-TableFactory-for-StatefulSequenceSource-td39116.html]
> Introduce:
>  * DataGeneratorSource
>  * DataGenTableSourceFactory
>  * PrintTableSinkFactory
>  * BlackHoleTableSinkFactory



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17521) Remove `com.ibm.icu` dependency from table-common

2020-05-05 Thread godfrey he (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100414#comment-17100414
 ] 

godfrey he commented on FLINK-17521:


[~twalthr], this dependency was added to solve the problem reported in 
[FLINK-16464|https://issues.apache.org/jira/browse/FLINK-16464]. When I 
implemented [FLINK-16366|https://issues.apache.org/jira/browse/FLINK-16366], I 
discussed offline with the author whether we need this dependency or should do 
the same work ourselves, and we found it would take a lot of lines of code. So I 
introduced the `com.ibm.icu` dependency in table-common, marking its scope as 
provided to avoid packaging `com.ibm.icu` into table-common.jar.
If we really want to remove this dependency, I agree we can copy some utility 
methods into table-common.

> Remove `com.ibm.icu` dependency from table-common
> -
>
> Key: FLINK-17521
> URL: https://issues.apache.org/jira/browse/FLINK-17521
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Timo Walther
>Priority: Major
>
> The `com.ibm.icu` dependency has been added recently to the `table-common` 
> module.
> Since this module is used by many connectors and libraries, we should discuss 
> to remove it again.
> Especially because now there is also another `Row.of()` in the classpath of 
> the API, which can be very confusing.





[GitHub] [flink] flinkbot edited a comment on pull request #11954: [FLINK-17420][table sql / api]Cannot alias Tuple and Row fields when converting DataStream to Table

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11954:
URL: https://github.com/apache/flink/pull/11954#issuecomment-621615631


   
   ## CI report:
   
   * 6a37ff126c35f1d7b35fb50ccff89f200967b2da Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/162875532) 
   * c5e67bf6cfb59353b1109c060b82820920d30ff8 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/164108249) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11960: [FLINK-12717][python] Support running PyFlink on Windows

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11960:
URL: https://github.com/apache/flink/pull/11960#issuecomment-621791651


   
   ## CI report:
   
   * 22afa2e085f3a320890575e4e9bc3802620b93ae Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=481)
 
   * a283855e4c5042bec925a05e15727ab2db71bd1e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-16998) Add a changeflag to Row type

2020-05-05 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100403#comment-17100403
 ] 

Jingsong Lee commented on FLINK-16998:
--

+1 to keep nested flag.

When introducing the data structures, Kurt and I discussed whether NestedRow should 
keep the header information. The final conclusion was that we don't need to treat 
external rows differently from nested rows, since doing so would make our design and 
implementation very complicated. Sharing the same binary layout between BinaryRow and 
NestedRow keeps NestedRow's processing simple.

And since we choose to save the flag information in Kafka, filesystem and other 
connectors, the flag information may be nested, may be stored in a list, and may be 
used anywhere.

> Add a changeflag to Row type
> 
>
> Key: FLINK-16998
> URL: https://issues.apache.org/jira/browse/FLINK-16998
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Core
>Reporter: Timo Walther
>Assignee: Timo Walther
>Priority: Major
>
> In Blink planner, the change flag of records travelling through the pipeline 
> is part of the record itself but not part of the logical schema. This 
> simplifies the architecture and API in many cases.
> This is why we aim to adopt the same mechanism for 
> {{org.apache.flink.types.Row}}.
> Take {{tableEnv.toRetractStream()}} as an example that returns either Scala 
> or Java {{Tuple2}}. For FLIP-95 we need to support more update 
> kinds than just a binary boolean.
> This means:
> - Add a changeflag {{RowKind}} to {{Row}}
> - Update the {{Row.toString()}} method
> - Update serializers in a backwards compatible way





[GitHub] [flink] flinkbot commented on pull request #12003: [FLINK-10934] Support application mode for kubernetes

2020-05-05 Thread GitBox


flinkbot commented on pull request #12003:
URL: https://github.com/apache/flink/pull/12003#issuecomment-624411858


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 068f558f4deeb9ddbb4cb0ea8013bbe099e912cd (Wed May 06 
02:34:19 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] shuiqiangchen commented on pull request #11955: [FLINK-17255][python] Add HBase connector descriptor support in PyFlink.

2020-05-05 Thread GitBox


shuiqiangchen commented on pull request #11955:
URL: https://github.com/apache/flink/pull/11955#issuecomment-624411530


   @flinkbot run azure







[GitHub] [flink] RocMarshal commented on pull request #11979: [FLINK-17291][docs] Translate 'docs/training/event_driven.zh.md' to C…

2020-05-05 Thread GitBox


RocMarshal commented on pull request #11979:
URL: https://github.com/apache/flink/pull/11979#issuecomment-624410277


   Hi, XBaith.
   I have completed the translation of this page and made corresponding 
improvements according to your suggestions. If you have free time, would 
you please review it for me?
   
   Thank you very much.
   
   Best,
   Roc
   
   
   
   在 2020-05-04 01:25:04,"Xu Bai"  写道:
   
   @XBaith commented on this pull request.
   
   Good job!
   I leave a few suggestions that may could help.
   
   In docs/training/event_driven.zh.md:
   
   >  
   -### Example
   +### 实例
   
   
   Do you mean “示例” instead of “实例”?
   
   In docs/training/event_driven.zh.md:
   
   >  
   -If you've done the
   -[hands-on exercise]({% link training/streaming_analytics.zh.md %}#hands-on)
   -in the [Streaming Analytics training]({% link 
training/streaming_analytics.zh.md %}),
   -you will recall that it uses a `TumblingEventTimeWindow` to compute the sum 
of the tips for
   -each driver during each hour, like this:
   +如果你已经体验了
   +[流式分析训练]({% link training/streaming_analytics.zh.md %})
   +的[动手实践]({% link training/streaming_analytics.zh.md %}#hands-on),
   +你会忆起,它是采用 `TumblingEventTimeWindow` 来计算每个小时内每个司机的小费总和,
   
   
   “忆起”还是说“想起”,我个人觉得“想起”读起来更符合我们平常的说法。亦或者说意译成“你应该记得”
   
   In docs/training/event_driven.zh.md:
   
   > @@ -51,8 +50,8 @@ DataStream> hourlyTips = fares
.process(new AddTips());
{% endhighlight %}

   -It is reasonably straightforward, and educational, to do the same thing 
with a
   -`KeyedProcessFunction`. Let us begin by replacing the code above with this:
   +使用 `KeyedProcessFunction` 去实现相同的效果是合理、直接且有学习意义的。
   
   ⬇️ Suggested change
   -使用 `KeyedProcessFunction` 去实现相同的效果是合理、直接且有学习意义的。
   +使用 `KeyedProcessFunction` 去实现相同的操作更加直接且更有学习意义。
   
   
   In docs/training/event_driven.zh.md:
   
   >  
   -There are several good reasons to want to have more than one output stream 
from a Flink operator, such as reporting:
   +有几个很好的理由希望从 Flink operator 获得多个输出流,如下报告条目:
   
   ⬇️ Suggested change
   -有几个很好的理由希望从 Flink operator 获得多个输出流,如下报告条目:
   +有几个很好的理由希望从 Flink 算子获得多个输出流,如下报告条目:
   
   
   In docs/training/event_driven.zh.md:
   
   >  
   -Another common use case for ProcessFunctions is for expiring stale state. 
If you think back to the
   -[Rides and Fares Exercise](https://github.com/apache/flink-training/tree/{% 
if site.is_stable %}release-{{ site.version_title }}{% else %}master{% endif 
%}/rides-and-fares),
   -where a `RichCoFlatMapFunction` is used to compute a simple join, the 
sample solution assumes that
   -the TaxiRides and TaxiFares are perfectly matched, one-to-one for each 
`rideId`. If an event is lost,
   -the other event for the same `rideId` will be held in state forever. This 
could instead be implemented
   -as a `KeyedCoProcessFunction`, and a timer could be used to detect and 
clear any stale state.
   +ProcessFunctions 的另一个常见用例是过期过时 State。如果你回想一下
   +[Rides and Fares Exercise](https://github.com/apache/flink-training/tree/{% 
if site.is_stable %}release-{{ site.version_title }}{% else %}master{% endif 
%}/rides-and-fares),
   +其中使用 `RichCoFlatMapFunction` 来计算简单 Join,那么示例解决方案假设 TaxiRides 和 TaxiFares 
   +完全匹配,每个 `rideId` 一对一。如果某个事件丢失,则同一 `rideId` 的另一个事件将永远保持 State。
   +这可以作为 `Keyedcomprocessfunction` 实现,并且可以使用计时器来检测和清除任何过时的 State。
   
   ⬇️ Suggested change
   -这可以作为 `Keyedcomprocessfunction` 实现,并且可以使用计时器来检测和清除任何过时的 State。
   +这可以作为 `KeyedCoProcessFunction` 实现,并且可以使用计时器来检测和清除任何过时的 State。
   
   
   —
   You are receiving this because you authored the thread.
   Reply to this email directly, view it on GitHub, or unsubscribe.







[GitHub] [flink] RocMarshal removed a comment on pull request #11979: [FLINK-17291][docs] Translate 'docs/training/event_driven.zh.md' to C…

2020-05-05 Thread GitBox


RocMarshal removed a comment on pull request #11979:
URL: https://github.com/apache/flink/pull/11979#issuecomment-623845807


   Hi, @wuchong. I have completed the translation of this page and made 
corresponding improvements according to the suggestions of community members. 
If you have free time, would you please review it for me?
   
   Thank you very much.







[GitHub] [flink] wangyang0918 commented on pull request #12003: [FLINK-10934] Support application mode for kubernetes

2020-05-05 Thread GitBox


wangyang0918 commented on pull request #12003:
URL: https://github.com/apache/flink/pull/12003#issuecomment-624409680


   cc @kl0u could you help to review the Kubernetes application mode at your 
convenience?







[jira] [Updated] (FLINK-10934) Support application mode in kubernetes

2020-05-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-10934:
---
Labels: pull-request-available  (was: )

> Support application mode in kubernetes
> --
>
> Key: FLINK-10934
> URL: https://issues.apache.org/jira/browse/FLINK-10934
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client / Job Submission, Deployment / Kubernetes
>Affects Versions: 1.11.0
>Reporter: JIN SUN
>Assignee: Yang Wang
>Priority: Major
>  Labels: pull-request-available
>
> Kubernetes does not have a default distributed storage, nor does it provide a 
> public API to ship files the way YARN local resources do. So we cannot ship the 
> user jars and files from the client side to the jobmanager and taskmanager, and 
> doing so is not a common pattern on Kubernetes anyway. Instead, users usually 
> build their jars and files into the docker image, so when the jobmanager and 
> taskmanager are launched, the user jars already exist.
> Even if some users do not want to build the jars into the image, they could 
> use an initContainer to download the jars from storage (http/s3/etc.).
> All in all, the Kubernetes per-job cluster will only support the cluster 
> deploy-mode (now called "application mode").
>  
> This ticket depends on FLIP-85(Support cluster deployment). Please reference 
> the documentation 
> [FLIP-85|https://cwiki.apache.org/confluence/display/FLINK/FLIP-85+Flink+Application+Mode]
>  for more information.





[GitHub] [flink] wangyang0918 opened a new pull request #12003: [FLINK-10934] Support application mode for kubernetes

2020-05-05 Thread GitBox


wangyang0918 opened a new pull request #12003:
URL: https://github.com/apache/flink/pull/12003


   
   
   ## What is the purpose of the change
   
   This PR adds support for application mode on Kubernetes. The start command 
is very similar to the YARN integration, as shown below. Both non-HA and HA mode 
should work as expected.
   ```
   flink run-application -t kubernetes-application \
   -Dkubernetes.cluster-id=${CLUSTER_ID} \
   -Dkubernetes.container.image=${FLINK_IMAGE_NAME} \
   -Djobmanager.memory.process.size=2048m \
   -Dkubernetes.jobmanager.cpu=1 \
   -Dkubernetes.taskmanager.cpu=1 \
   -Dkubernetes.rest-service.exposed.type=NodePort \
   local:///opt/flink/examples/streaming/WindowJoin.jar
   ```
   
   How to specify the user jar and classpath?
   In a native K8s application, when the jobmanager is launched, the user jar 
and dependencies already exist (e.g. built into the image, or downloaded by an 
init-container). 
   
   The user jar should be specified with the `local://` scheme, which means it 
exists on the jobmanager side.
   
   For the dependencies, users could put them in the `$FLINK_HOME/usrlib` 
directory; jars located in usrlib will be automatically added to the 
user classpath. They could also specify the user classpath via the `-C/--classpath` 
option of the `flink run-application` command.
   
   
   
   ## Brief change log
   
   * Make flink run-application could support local schema
   * Support application mode for kubernetes
   * Add e2e tests for Kubernetes application mode
   * Set log4j for Kubernetes cli
   
   
   ## Verifying this change
   * The changes is covered by new added UT and e2e 
test(`test_kubernetes_application.sh`)
   * Manually test in a real K8s cluster for the non-HA and HA mode
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (**yes** / no / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (**yes** / no)
 - If yes, how is the feature documented? (will update the doc in a 
separate ticket)
   







[GitHub] [flink] liying919 edited a comment on pull request #11982: [FLINK-17289][docs]Translate tutorials/etl.md to chinese

2020-05-05 Thread GitBox


liying919 edited a comment on pull request #11982:
URL: https://github.com/apache/flink/pull/11982#issuecomment-623415071


   > 
   > 
   > I don't think there's a way to fix these Travis failures. But Travis and 
Azure are testing the same. So if Azure is green, you don't need to worry about 
Travis.
   > We are in the process of decommissioning travis.
   
   Got it. Thanks for your reply :) 







[GitHub] [flink] flinkbot edited a comment on pull request #11954: [FLINK-17420][table sql / api]Cannot alias Tuple and Row fields when converting DataStream to Table

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11954:
URL: https://github.com/apache/flink/pull/11954#issuecomment-621615631


   
   ## CI report:
   
   * 6a37ff126c35f1d7b35fb50ccff89f200967b2da Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/162875532) 
   * c5e67bf6cfb59353b1109c060b82820920d30ff8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-17527) kubernetes-session.sh uses log4j-console.properties

2020-05-05 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100381#comment-17100381
 ] 

Yang Wang commented on FLINK-17527:
---

I am also in favor of unifying the logger configuration. When I introduced 
{{kubernetes-session.sh}} and chose {{log4j-console.properties}}, I had the 
following thoughts.

* {{log4j-console.properties}} and {{log4j-yarn-session.properties}} are very 
similar. Both of them output the logs only to the console.

* {{log4j-cli.properties}} does not have a corresponding logback configuration 
file; {{logback.xml}} is used instead.

 

So maybe we could remove {{log4j-yarn-session.properties}} and unify 
{{log4j-cli.properties}} and {{log4j-console.properties}}. The logback 
configuration files would also need to be updated.

 

> kubernetes-session.sh uses log4j-console.properties
> ---
>
> Key: FLINK-17527
> URL: https://issues.apache.org/jira/browse/FLINK-17527
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Priority: Major
> Fix For: 1.11.0, 1.10.2
>
>
> It is a bit confusing that {{kubernetes-session.sh}} uses 
> {{log4j-console.properties}}. At the moment, {{flink}} uses 
> {{log4j-cli.properties}}, {{yarn-session.sh}} uses 
> {{log4j-yarn-session.properties}} and {{kubernetes-session.sh}} uses 
> {{log4j-console.properties}}.
> I would suggest letting all scripts use the same logger configuration file 
> (e.g. {{log4j-cli.properties}}).





[GitHub] [flink] flinkbot edited a comment on pull request #12002: [FLINK-16845][connector/common] Add SourceOperator which runs the Source

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #12002:
URL: https://github.com/apache/flink/pull/12002#issuecomment-624393131


   
   ## CI report:
   
   * 4a3d66188bfe8484de52157d59d5850d3559073c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=655)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-16998) Add a changeflag to Row type

2020-05-05 Thread Kurt Young (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100373#comment-17100373
 ] 

Kurt Young commented on FLINK-16998:


Keeping the internal row's header makes sense, especially when we want to use a single 
row to represent an update message, which essentially consists of two messages: 
update_before and update_after. I would imagine that would be printed as:

+U(-UB(a, 2), +UA(a, 3))

> Add a changeflag to Row type
> 
>
> Key: FLINK-16998
> URL: https://issues.apache.org/jira/browse/FLINK-16998
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Core
>Reporter: Timo Walther
>Assignee: Timo Walther
>Priority: Major
>
> In Blink planner, the change flag of records travelling through the pipeline 
> is part of the record itself but not part of the logical schema. This 
> simplifies the architecture and API in many cases.
> This is why we aim to adopt the same mechanism for 
> {{org.apache.flink.types.Row}}.
> Take {{tableEnv.toRetractStream()}} as an example that returns either Scala 
> or Java {{Tuple2}}. For FLIP-95 we need to support more update 
> kinds than just a binary boolean.
> This means:
> - Add a changeflag {{RowKind}} to {{Row}}
> - Update the {{Row.toString()}} method
> - Update serializers in a backwards compatible way





[GitHub] [flink] flinkbot commented on pull request #12002: [FLINK-16845][connector/common] Add SourceOperator which runs the Source

2020-05-05 Thread GitBox


flinkbot commented on pull request #12002:
URL: https://github.com/apache/flink/pull/12002#issuecomment-624393131


   
   ## CI report:
   
   * 4a3d66188bfe8484de52157d59d5850d3559073c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] leonardBang commented on a change in pull request #11954: [FLINK-17420][table sql / api]Cannot alias Tuple and Row fields when converting DataStream to Table

2020-05-05 Thread GitBox


leonardBang commented on a change in pull request #11954:
URL: https://github.com/apache/flink/pull/11954#discussion_r420492664



##
File path: 
flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/typeutils/FieldInfoUtilsTest.java
##
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.typeutils;
+
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeinfo.Types;
+import org.apache.flink.api.java.typeutils.RowTypeInfo;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.UnresolvedCallExpression;
+import org.apache.flink.table.expressions.UnresolvedReferenceExpression;
+import org.apache.flink.table.expressions.ValueLiteralExpression;
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.functions.FunctionIdentifier;
+import org.apache.flink.table.types.DataType;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.util.Arrays;
+
+/**
+ * Test suite for {@link FieldInfoUtils}.
+ */
+public class FieldInfoUtilsTest {
+
+   private static final RowTypeInfo typeInfo = new RowTypeInfo(

Review comment:
   sure









[jira] [Created] (FLINK-17534) Update the interfaces to PublicEvolving and add documentation.

2020-05-05 Thread Jiangjie Qin (Jira)
Jiangjie Qin created FLINK-17534:


 Summary: Update the interfaces to PublicEvolving and add 
documentation.
 Key: FLINK-17534
 URL: https://issues.apache.org/jira/browse/FLINK-17534
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Common
Reporter: Jiangjie Qin








[GitHub] [flink] flinkbot commented on pull request #12002: [FLINK-16845][connector/common] Add SourceOperator which runs the Source

2020-05-05 Thread GitBox


flinkbot commented on pull request #12002:
URL: https://github.com/apache/flink/pull/12002#issuecomment-624383369


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 4a3d66188bfe8484de52157d59d5850d3559073c (Wed May 06 
00:50:45 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Updated] (FLINK-16845) Implement SourceReaderOperator which runs the SourceReader.

2020-05-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16845:
---
Labels: pull-request-available  (was: )

> Implement SourceReaderOperator which runs the SourceReader.
> ---
>
> Key: FLINK-16845
> URL: https://issues.apache.org/jira/browse/FLINK-16845
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Common
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Major
>  Labels: pull-request-available
>
> This ticket should implement the {{SourceReaderOperator}} which runs the 
> {{SourceReader}}.





[GitHub] [flink] becketqin opened a new pull request #12002: [FLINK-16845][connector/common] Add SourceOperator which runs the Source

2020-05-05 Thread GitBox


becketqin opened a new pull request #12002:
URL: https://github.com/apache/flink/pull/12002


   ## What is the purpose of the change
   This PR is a part of FLIP-27.  The patch implements a `SourceOperator` which 
runs the `Source`. 
   
   ## Brief change log
   * Introduced a new interface, `WithOperatorCoordinator`, for operators with 
coordinators.
   * Implemented `SourceOperator` to run `Source`.
   * Added unit tests and improved the `StreamTaskMailboxTestHarness`.
   
   ## Verifying this change
   
   This change can be verified with the following tests.
   org.apache.flink.streaming.api.operators.SourceOperatorTest
   org.apache.flink.streaming.runtime.tasks.SourceOperatorStreamTaskTest.java
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (JavaDocs)
   







[jira] [Updated] (FLINK-17533) Add support for concurrent checkpoints in StateFun

2020-05-05 Thread Seth Wiesman (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seth Wiesman updated FLINK-17533:
-
Summary: Add support for concurrent checkpoints in StateFun  (was: Add 
support for concurrent checkpoints in SateFun)

> Add support for concurrent checkpoints in StateFun
> --
>
> Key: FLINK-17533
> URL: https://issues.apache.org/jira/browse/FLINK-17533
> Project: Flink
>  Issue Type: Improvement
>  Components: Stateful Functions
>Affects Versions: statefun-2.0.0
>Reporter: Igal Shilman
>Assignee: Igal Shilman
>Priority: Major
> Fix For: statefun-2.1.0
>
>
> This issue is about adding support for concurrent checkpoints to stateful 
> functions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12001: [FLINK-17204][connectors/rabbitmq] Make RMQ queue declaration consistent between source and sink

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #12001:
URL: https://github.com/apache/flink/pull/12001#issuecomment-624253499


   
   ## CI report:
   
   * cd20d1744653fe796408d892b6c9a7b4560b35ae Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=653)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Updated] (FLINK-17529) Replace Deprecated RMQ QueueingConsumer

2020-05-05 Thread Austin Cawley-Edwards (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Austin Cawley-Edwards updated FLINK-17529:
--
Description: The RMQ QueueingConsumer is used in the RMQSource to get a 
simple blocking consumer. This has been deprecated in 
`com.rabbitmq:amqp-client` 4.2.0 and is removed in 5.x. It should be replaced 
by a `com.rabbitmq.client.DefaultConsumer`.  (was: The RMQ QueueingConsumer is 
used in the RMQSource to get a simple blocking consumer. This has been 
deprecated in `com.rabbitmq:amqp-client` 4.2.0 and will be removed in 5.x. It 
should be replaced by a `com.rabbitmq.client.DefaultConsumer`.)

> Replace Deprecated RMQ QueueingConsumer
> ---
>
> Key: FLINK-17529
> URL: https://issues.apache.org/jira/browse/FLINK-17529
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / RabbitMQ
>Affects Versions: 1.10.0
>Reporter: Austin Cawley-Edwards
>Priority: Minor
>
> The RMQ QueueingConsumer is used in the RMQSource to get a simple blocking 
> consumer. This has been deprecated in `com.rabbitmq:amqp-client` 4.2.0 and is 
> removed in 5.x. It should be replaced by a 
> `com.rabbitmq.client.DefaultConsumer`.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
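The migration the description calls for is usually done by pushing deliveries from a 
callback-style consumer into a `BlockingQueue`, so the source thread can keep blocking 
the way it did with `QueueingConsumer.nextDelivery()`. The sketch below mirrors that 
pattern using only the JDK; the amqp-client types (`DefaultConsumer`, `Envelope`, 
`Delivery`) are not reproduced here, and the `Delivery` class and method names are 
stand-ins, not the connector's actual API:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

/**
 * Stdlib-only sketch of the QueueingConsumer replacement pattern:
 * a push-style callback (DefaultConsumer.handleDelivery in amqp-client)
 * hands each message to a BlockingQueue, and the consuming thread
 * blocks on poll/take, just as it did with nextDelivery().
 */
public class BlockingDeliveryBuffer {
    /** Minimal stand-in for an AMQP delivery (delivery tag + body). */
    public static final class Delivery {
        public final long deliveryTag;
        public final byte[] body;
        public Delivery(long deliveryTag, byte[] body) {
            this.deliveryTag = deliveryTag;
            this.body = body;
        }
    }

    private final BlockingQueue<Delivery> queue = new LinkedBlockingQueue<>();

    /** Called from the consumer callback thread (i.e. handleDelivery). */
    public void handleDelivery(long deliveryTag, byte[] body) {
        queue.add(new Delivery(deliveryTag, body));
    }

    /** Called from the source thread; replaces nextDelivery(timeout). */
    public Delivery nextDelivery(long timeoutMillis) throws InterruptedException {
        // Returns null on timeout, mirroring the old blocking-with-timeout call.
        return queue.poll(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingDeliveryBuffer buffer = new BlockingDeliveryBuffer();
        // In the real connector this call would happen inside
        // DefaultConsumer.handleDelivery on the connection's dispatch thread.
        buffer.handleDelivery(1L, "hello".getBytes());
        Delivery d = buffer.nextDelivery(100);
        System.out.println(d.deliveryTag + ":" + new String(d.body)); // prints 1:hello
    }
}
```

The design point is that `DefaultConsumer` pushes while the source pulls, so the 
queue is the one hand-off point; bounding it (e.g. `LinkedBlockingQueue(capacity)`) 
would also give the back-pressure the deprecated consumer never had.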


[jira] [Commented] (FLINK-17529) Replace Deprecated RMQ QueueingConsumer

2020-05-05 Thread Austin Cawley-Edwards (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100286#comment-17100286
 ] 

Austin Cawley-Edwards commented on FLINK-17529:
---

Coming from https://issues.apache.org/jira/browse/FLINK-10195 (and the 
associated PR), just replacing the consumer might be an easier first step than 
to do it with that fix.

> Replace Deprecated RMQ QueueingConsumer
> ---
>
> Key: FLINK-17529
> URL: https://issues.apache.org/jira/browse/FLINK-17529
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / RabbitMQ
>Affects Versions: 1.10.0
>Reporter: Austin Cawley-Edwards
>Priority: Minor
>
> The RMQ QueueingConsumer is used in the RMQSource to get a simple blocking 
> consumer. This has been deprecated in `com.rabbitmq:amqp-client` 4.2.0 and 
> will be removed in 5.x. It should be replaced by a 
> `com.rabbitmq.client.DefaultConsumer`.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17529) Replace Deprecated RMQ QueueingConsumer

2020-05-05 Thread Austin Cawley-Edwards (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Austin Cawley-Edwards updated FLINK-17529:
--
Priority: Minor  (was: Major)

> Replace Deprecated RMQ QueueingConsumer
> ---
>
> Key: FLINK-17529
> URL: https://issues.apache.org/jira/browse/FLINK-17529
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / RabbitMQ
>Affects Versions: 1.10.0
>Reporter: Austin Cawley-Edwards
>Priority: Minor
>
> The RMQ QueueingConsumer is used in the RMQSource to get a simple blocking 
> consumer. This has been deprecated in `com.rabbitmq:amqp-client` 4.2.0 and 
> will be removed in 5.x. It should be replaced by a 
> `com.rabbitmq.client.DefaultConsumer`.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17529) Replace Deprecated RMQ QueueingConsumer

2020-05-05 Thread Austin Cawley-Edwards (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Austin Cawley-Edwards updated FLINK-17529:
--
Issue Type: Improvement  (was: Bug)

> Replace Deprecated RMQ QueueingConsumer
> ---
>
> Key: FLINK-17529
> URL: https://issues.apache.org/jira/browse/FLINK-17529
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / RabbitMQ
>Affects Versions: 1.10.0
>Reporter: Austin Cawley-Edwards
>Priority: Major
>
> The RMQ QueueingConsumer is used in the RMQSource to get a simple blocking 
> consumer. This has been deprecated in `com.rabbitmq:amqp-client` 4.2.0 and 
> will be removed in 5.x. It should be replaced by a 
> `com.rabbitmq.client.DefaultConsumer`.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] edu05 commented on pull request #11952: [FLINK-16638][runtime][checkpointing] Flink checkStateMappingCompleteness doesn't include UserDefinedOperatorIDs

2020-05-05 Thread GitBox


edu05 commented on pull request #11952:
URL: https://github.com/apache/flink/pull/11952#issuecomment-624330629


   > Thanks for updating, @edu05.
   > The changes look good to me.
   
   Awesome, who can merge the PR?







[GitHub] [flink] flinkbot edited a comment on pull request #11961: [FLINK-16097] Translate "SQL Client" page of "Table API & SQL" into Chinese

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11961:
URL: https://github.com/apache/flink/pull/11961#issuecomment-621825141


   
   ## CI report:
   
   * c488a03b66e642c5de66dabdb10b3bf40be4ff54 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=639)
 
   
   







[jira] [Created] (FLINK-17533) Add support for concurrent checkpoints in SateFun

2020-05-05 Thread Igal Shilman (Jira)
Igal Shilman created FLINK-17533:


 Summary: Add support for concurrent checkpoints in SateFun
 Key: FLINK-17533
 URL: https://issues.apache.org/jira/browse/FLINK-17533
 Project: Flink
  Issue Type: Improvement
  Components: Stateful Functions
Affects Versions: statefun-2.0.0
Reporter: Igal Shilman
Assignee: Igal Shilman
 Fix For: statefun-2.1.0


This issue is about adding support for concurrent checkpoints to stateful 
functions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-web] sjwiesman edited a comment on pull request #335: Add Blog: "Flink SQL Demo: Building an End to End Streaming Application"

2020-05-05 Thread GitBox


sjwiesman edited a comment on pull request #335:
URL: https://github.com/apache/flink-web/pull/335#issuecomment-624243328


   Hi @wuchong, I really like this. Can I ask why Flink is not dockerized? I 
think removing all this setup would go a long way towards helping attract new 
users to this post and make it less daunting to follow along. There are 
examples of setting up this sort of environment in the Ververica SQL training 
repo and on Fabian's Github.
   
   Instead of using `wget` to download the docker-compose file we could clone a 
repo that has a docker-compose along with any additional Dockerfiles if need be.
   
   I realize this is a larger-than-normal ask for a blog post, but I think it 
would be very beneficial to increasing Flink SQL's reach. 
   
   [1] https://github.com/ververica/sql-training/
   [2] https://github.com/fhueske/flink-sql-demo
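   The clone-instead-of-wget suggestion could look like the sketch below. The 
repository URL is an assumption (a placeholder for whichever repo ends up hosting 
the docker-compose file and Dockerfiles), and the `run` wrapper keeps the script 
in dry-run mode:

```shell
#!/bin/sh
# Hypothetical demo repository -- not a confirmed location for the blog's files.
REPO_URL="https://github.com/ververica/sql-training"

# Dry-run wrapper: prints each command instead of executing it.
# Drop the 'echo' to actually clone the repo and start the containers.
run() { echo "+ $*"; }

run git clone "$REPO_URL" flink-sql-demo
run cd flink-sql-demo
run docker-compose up -d   # brings up whatever services the repo's compose file defines
```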
   







[GitHub] [flink] flinkbot edited a comment on pull request #11999: [FLINK-14100][jdbc] Added Oracle dialect

2020-05-05 Thread GitBox


flinkbot edited a comment on pull request #11999:
URL: https://github.com/apache/flink/pull/11999#issuecomment-624154294


   
   ## CI report:
   
   * d86e75eb3bcafe6695ab845719196447d30b7e18 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=638)
 
   
   






