[jira] [Comment Edited] (FLINK-16636) TableEnvironmentITCase is crashing on Travis

2020-04-20 Thread Caizhi Weng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17088308#comment-17088308
 ] 

Caizhi Weng edited comment on FLINK-16636 at 4/21/20, 5:58 AM:
---

Hi [~rmetzger], thanks for reporting. Is the heap dump available now? I didn't 
find any heap dump files in the .tar.gz.

From my current investigation it seems to be caused by memory leaks. According 
to two Stack Overflow posts 
([this|https://stackoverflow.com/questions/54755846/killing-self-fork-jvm-ping-timeout-elapsed]
 and 
[this|https://stackoverflow.com/questions/1124771/how-to-solve-java-io-ioexception-error-12-cannot-allocate-memory-calling-run]),
 Maven forks another JVM process to run the tests.

So if the current JVM grabs too much memory, the forked JVM will initially 
require just as much. If the OS cannot allocate that much memory, 
"java.io.IOException: error=12, Cannot allocate memory" occurs. I'm going 
to investigate why the memory usage is so high. It would be helpful if a heap 
dump from jmap or a similar tool were available.
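
For reference, one way to capture such a dump from inside the test JVM is the HotSpot 
diagnostic MXBean. This is only a sketch (the class name and file path below are made up 
for illustration); running jmap -dump against the forked JVM's pid produces the same kind 
of .hprof file.

{code:java}
import com.sun.management.HotSpotDiagnosticMXBean;

import java.lang.management.ManagementFactory;

public class HeapDumper {

	/** Writes an .hprof snapshot of the current JVM, e.g. from a test hook shortly before the crash. */
	public static void dump(String path, boolean liveObjectsOnly) throws Exception {
		HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
				ManagementFactory.getPlatformMBeanServer(),
				"com.sun.management:type=HotSpotDiagnostic",
				HotSpotDiagnosticMXBean.class);
		// Produces the same data as "jmap -dump:live,format=b,file=<path> <pid>".
		bean.dumpHeap(path, liveObjectsOnly);
	}

	public static void main(String[] args) throws Exception {
		dump("heap.hprof", true);
	}
}
{code}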


was (Author: tsreaper):
Hi [~rmetzger] thanks for reporting. Is the heap dump available now? I didn't 
find heap dump files in the .tar.gz

> TableEnvironmentITCase is crashing on Travis
> 
>
> Key: FLINK-16636
> URL: https://issues.apache.org/jira/browse/FLINK-16636
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: Jark Wu
>Assignee: Caizhi Weng
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Here is the instance and exception stack: 
> https://api.travis-ci.org/v3/job/663408376/log.txt
> But there is not much helpful information there, maybe an accidental Maven 
> problem.
> {code}
> 09:55:07.703 [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test 
> (integration-tests) on project flink-table-planner-blink_2.11: There are test 
> failures.
> 09:55:07.703 [ERROR] 
> 09:55:07.703 [ERROR] Please refer to 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target/surefire-reports
>  for the individual test results.
> 09:55:07.703 [ERROR] Please refer to dump files (if any exist) [date].dump, 
> [date]-jvmRun[N].dump and [date].dumpstream.
> 09:55:07.703 [ERROR] ExecutionException The forked VM terminated without 
> properly saying goodbye. VM crash or System.exit called?
> 09:55:07.703 [ERROR] Command was /bin/sh -c cd 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target 
> && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=1 -XX:+UseG1GC -jar 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target/surefire/surefirebooter714252487017838305.jar
>  
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target/surefire
>  2020-03-17T09-34-41_826-jvmRun1 surefire4625103637332937565tmp 
> surefire_43192129054983363633tmp
> 09:55:07.703 [ERROR] Error occurred in starting fork, check output in log
> 09:55:07.703 [ERROR] Process Exit Code: 137
> 09:55:07.703 [ERROR] Crashed tests:
> 09:55:07.703 [ERROR] org.apache.flink.table.api.TableEnvironmentITCase
> 09:55:07.703 [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: 
> ExecutionException The forked VM terminated without properly saying goodbye. 
> VM crash or System.exit called?
> 09:55:07.703 [ERROR] Command was /bin/sh -c cd 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target 
> && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=1 -XX:+UseG1GC -jar 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target/surefire/surefirebooter714252487017838305.jar
>  
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target/surefire
>  2020-03-17T09-34-41_826-jvmRun1 surefire4625103637332937565tmp 
> surefire_43192129054983363633tmp
> 09:55:07.703 [ERROR] Error occurred in starting fork, check output in log
> 09:55:07.703 [ERROR] Process Exit Code: 137
> 09:55:07.703 [ERROR] Crashed tests:
> 09:55:07.703 [ERROR] org.apache.flink.table.api.TableEnvironmentITCase
> 09:55:07.703 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:510)
> 09:55:07.704 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkOnceMultiple(ForkStarter.java:382)
> 09:55:07.704 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:297)
> 09:55:07.704 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:246)
> 

[jira] [Closed] (FLINK-17014) Implement PipelinedRegionSchedulingStrategy

2020-04-20 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-17014.
---
Resolution: Fixed

Implemented via
67835fe8b72601dbb046683c04cc9a4cedf77db3
99cbaa929ff9f2f5c387cbf4f76a0166f83a3a8c

> Implement PipelinedRegionSchedulingStrategy
> ---
>
> Key: FLINK-17014
> URL: https://issues.apache.org/jira/browse/FLINK-17014
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The PipelinedRegionSchedulingStrategy submits one pipelined region to the 
> DefaultScheduler at a time. The PipelinedRegionSchedulingStrategy must be 
> aware of the inputs of each pipelined region. It should schedule a region if 
> and only if all the inputs of that region become consumable.
> PipelinedRegionSchedulingStrategy can be implemented as below:
>  * startScheduling(): schedule all source regions one by one.
>  * onPartitionConsumable(partition): check all the consumer regions of the 
> notified partition; if all the inputs of a region have become consumable, 
> schedule the region.
>  * restartTasks(tasksToRestart): find all regions that contain the tasks to 
> restart, and reschedule those whose inputs are all consumable.
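
A minimal Java sketch of the three callbacks listed above, using simplified, hypothetical 
types (a plain Region interface and string partition ids) rather than the actual Flink 
runtime classes:

{code:java}
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/** Sketch only: releases a region to the scheduler once all of its inputs are consumable. */
class PipelinedRegionSchedulingSketch {

	/** Simplified stand-in for a pipelined region. */
	interface Region {
		Set<String> inputPartitions();   // result partitions this region consumes
		boolean isSource();              // true if the region has no inputs at all
	}

	private final List<Region> regions;
	private final Map<String, List<Region>> consumersByPartition;
	private final Set<String> consumablePartitions = new HashSet<>();

	PipelinedRegionSchedulingSketch(List<Region> regions, Map<String, List<Region>> consumersByPartition) {
		this.regions = regions;
		this.consumersByPartition = consumersByPartition;
	}

	/** startScheduling(): schedule all source regions one by one. */
	void startScheduling() {
		regions.stream().filter(Region::isSource).forEach(this::scheduleRegion);
	}

	/** onPartitionConsumable(partition): schedule consumers whose inputs are now all consumable. */
	void onPartitionConsumable(String partition) {
		consumablePartitions.add(partition);
		for (Region consumer : consumersByPartition.getOrDefault(partition, Collections.emptyList())) {
			if (consumablePartitions.containsAll(consumer.inputPartitions())) {
				scheduleRegion(consumer);
			}
		}
	}

	/** restartTasks(...): here simplified to take the affected regions directly. */
	void restartRegions(Set<Region> regionsToRestart) {
		regionsToRestart.stream()
			.filter(r -> consumablePartitions.containsAll(r.inputPartitions()))
			.forEach(this::scheduleRegion);
	}

	private void scheduleRegion(Region region) {
		// hand the region over to the scheduler; deployment details are omitted in this sketch
	}
}
{code}

The real strategy additionally has to map restarted tasks back to their containing regions 
and guard against scheduling a region twice; both are omitted in this sketch.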



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wangyang0918 commented on issue #11820: [FLINK-17264][scripts] Fix broken taskmanager.sh by using -eq to compare num_lines

2020-04-20 Thread GitBox


wangyang0918 commented on issue #11820:
URL: https://github.com/apache/flink/pull/11820#issuecomment-616966774


   I am not sure how many users are running the e2e tests on a Mac, or how many 
incompatible commands we have in the e2e tests. So maybe we should create 
another ticket and have the discussion there.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dianfu commented on a change in pull request #11702: [FLINK-16667][python][client] Support new Python dependency configuration options in flink-client.

2020-04-20 Thread GitBox


dianfu commented on a change in pull request #11702:
URL: https://github.com/apache/flink/pull/11702#discussion_r411877184



##
File path: 
flink-clients/src/main/java/org/apache/flink/client/cli/ProgramOptions.java
##
@@ -184,10 +175,65 @@ public SavepointRestoreSettings getSavepointRestoreSettings() {
		return savepointSettings;
	}

-	/**
-	 * Indicates whether the job is a Python job.
-	 */
-	public boolean isPython() {
-		return isPython;
+	public void applyToConfiguration(Configuration configuration) {
+		if (getParallelism() != ExecutionConfig.PARALLELISM_DEFAULT) {
+			configuration.setInteger(CoreOptions.DEFAULT_PARALLELISM, getParallelism());
+		}
+
+		configuration.setBoolean(DeploymentOptions.ATTACHED, !getDetachedMode());
+		configuration.setBoolean(DeploymentOptions.SHUTDOWN_IF_ATTACHED, isShutdownOnAttachedExit());
+		ConfigUtils.encodeCollectionToConfig(configuration, PipelineOptions.CLASSPATHS, getClasspaths(), URL::toString);
+		SavepointRestoreSettings.toConfiguration(getSavepointRestoreSettings(), configuration);
+	}
+
+	public static ProgramOptions create(CommandLine line) throws CliArgsException {
+		if (isPython(line) || containsPythonDependencyOptions(line)) {
+			return createPythonPropramOptions(line);

Review comment:
   typo: createPythonPropramOptions  -> createPythonProgramOptions

##
File path: 
flink-python/src/main/java/org/apache/flink/client/cli/PythonProgramOptions.java
##
@@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.client.cli;
+
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.python.util.PythonDependencyUtils;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.Option;
+
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+import static org.apache.flink.client.cli.CliFrontendParser.ARGS_OPTION;
+import static org.apache.flink.client.cli.CliFrontendParser.PYMODULE_OPTION;
+import static org.apache.flink.client.cli.CliFrontendParser.PY_OPTION;
+
+/**
+ * The class for command line options that refer to a Python program or JAR 
program with Python command line options.
+ */
+public class PythonProgramOptions extends ProgramOptions {
+
+   private final Configuration pythonConfiguration;
+
+   private final boolean isPython;
+
+   public PythonProgramOptions(CommandLine line) throws CliArgsException {
+   super(line);
+   isPython = isPython(line);

Review comment:
   The name `isPython` is confusing. Considering the class 
`PythonProgramOptions` is specify for Python, should it always be true? I guess 
you mean `isEntryPointPython`? If so we should rename the variable `isPython` 
and the util method `isPython` to `isEntryPointPython`. What do you think?

##
File path: 
flink-clients/src/main/java/org/apache/flink/client/cli/ProgramOptions.java
##
@@ -184,10 +175,65 @@ public SavepointRestoreSettings 
getSavepointRestoreSettings() {
return savepointSettings;
}
 
-   /**
-* Indicates whether the job is a Python job.
-*/
-   public boolean isPython() {
-   return isPython;
+   public void applyToConfiguration(Configuration configuration) {
+   if (getParallelism() != ExecutionConfig.PARALLELISM_DEFAULT) {
+   
configuration.setInteger(CoreOptions.DEFAULT_PARALLELISM, getParallelism());
+   }
+
+   configuration.setBoolean(DeploymentOptions.ATTACHED, 
!getDetachedMode());
+   
configuration.setBoolean(DeploymentOptions.SHUTDOWN_IF_ATTACHED, 
isShutdownOnAttachedExit());
+   ConfigUtils.encodeCollectionToConfig(configuration, 
PipelineOptions.CLASSPATHS, getClasspaths(), URL::toString);
+   
SavepointRestoreSettings.toConfiguration(getSavepointRestoreSettings(), 
configuration);
+   }
+
+   

[GitHub] [flink] wuchong commented on a change in pull request #11822: [FLINK-16788][Connector]ElasticSearch Connector SQL DDL add optional config (eg: username/password)

2020-04-20 Thread GitBox


wuchong commented on a change in pull request #11822:
URL: https://github.com/apache/flink/pull/11822#discussion_r411878291



##
File path: 
flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/table/descriptors/Elasticsearch.java
##
@@ -93,6 +95,26 @@ public Elasticsearch host(String hostname, int port, String 
protocol) {
return this;
}
 
+   /**
+* The Elasticsearch Cluster userName.
+*
+* @param userName Elasticsearch userName
+*/
+   public Elasticsearch userName(String userName) {

Review comment:
   lower case for N
   ```suggestion
public Elasticsearch username(String username) {
   ```

##
File path: 
flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/table/descriptors/Elasticsearch.java
##
@@ -93,6 +95,26 @@ public Elasticsearch host(String hostname, int port, String 
protocol) {
return this;
}
 
+   /**
+* The Elasticsearch Cluster userName.

Review comment:
   ditto

##
File path: 
flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/table/descriptors/ElasticsearchValidator.java
##
@@ -76,12 +78,18 @@ public void validate(DescriptorProperties properties) {
properties.validateValue(CONNECTOR_TYPE, 
CONNECTOR_TYPE_VALUE_ELASTICSEARCH, false);
validateVersion(properties);
validateHosts(properties);
+   validateAuth(properties);
validateGeneralProperties(properties);
validateFailureHandler(properties);
validateBulkFlush(properties);
validateConnectionProperties(properties);
}
 
+   private void validateAuth(DescriptorProperties properties) {
+   properties.validateString(CONNECTOR_USERNAME, true);
+   properties.validateString(CONNECTOR_PASSWORD, true);

Review comment:
   We should validate that both username and password exist if one of them is 
configured.
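
   A rough sketch of what that pairing check could look like (assuming `DescriptorProperties` 
offers `containsKey` and a `validateString` overload with a minimum length; adjust to the 
actual API):

```java
private void validateAuth(DescriptorProperties properties) {
	boolean hasUsername = properties.containsKey(CONNECTOR_USERNAME);
	boolean hasPassword = properties.containsKey(CONNECTOR_PASSWORD);
	// Each key is optional only while its counterpart is absent, so configuring one
	// makes the other mandatory; a minimum length of 1 also rejects empty credentials.
	properties.validateString(CONNECTOR_USERNAME, !hasPassword, 1);
	properties.validateString(CONNECTOR_PASSWORD, !hasUsername, 1);
}
```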

##
File path: 
flink-connectors/flink-connector-elasticsearch6/src/main/java/org/apache/flink/streaming/connectors/elasticsearch6/Elasticsearch6UpsertTableSink.java
##
@@ -189,6 +207,67 @@ protected ElasticsearchUpsertTableSinkBase copy(
// Helper classes
// 

 
+   /**
+* This class implements {@link RestClientFactory}, used for es with 
authentication.
+*/
+   static class AuthRestClientFactory implements RestClientFactory {
+
+   private String userName;
+
+   private String password;
+
+   private Integer maxRetryTimeout;
+
+   private String pathPrefix;
+
+   private transient CredentialsProvider credentialsProvider;
+
+   public AuthRestClientFactory(@Nullable String userName, 
@Nullable String password,

Review comment:
   `username` and `password` are not nullable, because you will use 
`AuthRestClientFactory` iff username and password are set. 
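
   If we keep the separate factory, the non-null contract can also be made explicit in the 
constructor. A minimal illustration (hypothetical class name, not the PR's code):

```java
import static java.util.Objects.requireNonNull;

/** Illustration only: a credentials holder that is created only when both values are present. */
class AuthCredentials {

	private final String username;
	private final String password;

	AuthCredentials(String username, String password) {
		// Fail fast instead of accepting @Nullable values that are never expected to be null.
		this.username = requireNonNull(username, "username must be set for an authenticated client");
		this.password = requireNonNull(password, "password must be set for an authenticated client");
	}
}
```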

##
File path: 
flink-connectors/flink-connector-elasticsearch6/src/main/java/org/apache/flink/streaming/connectors/elasticsearch6/Elasticsearch6UpsertTableSink.java
##
@@ -159,12 +165,24 @@ protected ElasticsearchUpsertTableSinkBase copy(
Optional.ofNullable(sinkOptions.get(BULK_FLUSH_BACKOFF_DELAY))
.ifPresent(v -> 
builder.setBulkFlushBackoffDelay(Long.valueOf(v)));
 
-   builder.setRestClientFactory(
-   new DefaultRestClientFactory(
+   if 
(Optional.ofNullable(sinkOptions.get(CONNECTOR_USERNAME)).isPresent() &&
+   
Optional.ofNullable(sinkOptions.get(CONNECTOR_PASSWORD)).isPresent()) {
+   builder.setRestClientFactory(new AuthRestClientFactory(
+   sinkOptions.get(CONNECTOR_USERNAME),
+   sinkOptions.get(CONNECTOR_PASSWORD),

Optional.ofNullable(sinkOptions.get(REST_MAX_RETRY_TIMEOUT))
.map(Integer::valueOf)
.orElse(null),
-   sinkOptions.get(REST_PATH_PREFIX)));
+   sinkOptions.get(REST_PATH_PREFIX)
+   ));
+   } else {
+   builder.setRestClientFactory(
+   new DefaultRestClientFactory(
+   
Optional.ofNullable(sinkOptions.get(REST_MAX_RETRY_TIMEOUT))
+   .map(Integer::valueOf)
+   .orElse(null),
+   sinkOptions.get(REST_PATH_PREFIX)));
+ 

[GitHub] [flink] flinkbot edited a comment on issue #11837: [FLINK-16160][table-planner-blink] Fix proctime()/rowtime() doesn't w…

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11837:
URL: https://github.com/apache/flink/pull/11837#issuecomment-616956896


   
   ## CI report:
   
   * afa79d577eecd36ad2843f50380c07a33bedaa6c Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161174787) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7825)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11835: [hotfix][table sql planner/table sql legacy planner]fix icu license in NOTICE file.

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11835:
URL: https://github.com/apache/flink/pull/11835#issuecomment-616914851


   
   ## CI report:
   
   * 3f4e611749de42b1c7b18a7a421879e2a55379d1 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161161009) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7812)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11820: [FLINK-17264][scripts] Fix broken taskmanager.sh by using -eq to compare num_lines

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11820:
URL: https://github.com/apache/flink/pull/11820#issuecomment-616475056


   
   ## CI report:
   
   * 56a03626e5f03a65e74708c424fd8503ba563a32 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161065593) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7794)
 
   * bb53e79998e8bdc17c602aa399ef1a4df3e42875 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161174769) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7824)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11578: [RELEASE-1.10][FLINK-16874] Respect the dynamic options when calculating memory opt…

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11578:
URL: https://github.com/apache/flink/pull/11578#issuecomment-606518441


   
   ## CI report:
   
   * e4a8b6583841ddaa4abb405d7988b6e5883c085d Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/160523697) 
   * ae19b37903d8116e3457103ea4dac98fb8a85a23 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161174685) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-16636) TableEnvironmentITCase is crashing on Travis

2020-04-20 Thread Caizhi Weng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17088308#comment-17088308
 ] 

Caizhi Weng commented on FLINK-16636:
-

Hi [~rmetzger] thanks for reporting. Is the heap dump available now? I didn't 
find heap dump files in the .tar.gz

> TableEnvironmentITCase is crashing on Travis
> 
>
> Key: FLINK-16636
> URL: https://issues.apache.org/jira/browse/FLINK-16636
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: Jark Wu
>Assignee: Caizhi Weng
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Here is the instance and exception stack: 
> https://api.travis-ci.org/v3/job/663408376/log.txt
> But there is not much helpful information there, maybe an accidental Maven 
> problem.
> {code}
> 09:55:07.703 [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test 
> (integration-tests) on project flink-table-planner-blink_2.11: There are test 
> failures.
> 09:55:07.703 [ERROR] 
> 09:55:07.703 [ERROR] Please refer to 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target/surefire-reports
>  for the individual test results.
> 09:55:07.703 [ERROR] Please refer to dump files (if any exist) [date].dump, 
> [date]-jvmRun[N].dump and [date].dumpstream.
> 09:55:07.703 [ERROR] ExecutionException The forked VM terminated without 
> properly saying goodbye. VM crash or System.exit called?
> 09:55:07.703 [ERROR] Command was /bin/sh -c cd 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target 
> && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=1 -XX:+UseG1GC -jar 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target/surefire/surefirebooter714252487017838305.jar
>  
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target/surefire
>  2020-03-17T09-34-41_826-jvmRun1 surefire4625103637332937565tmp 
> surefire_43192129054983363633tmp
> 09:55:07.703 [ERROR] Error occurred in starting fork, check output in log
> 09:55:07.703 [ERROR] Process Exit Code: 137
> 09:55:07.703 [ERROR] Crashed tests:
> 09:55:07.703 [ERROR] org.apache.flink.table.api.TableEnvironmentITCase
> 09:55:07.703 [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: 
> ExecutionException The forked VM terminated without properly saying goodbye. 
> VM crash or System.exit called?
> 09:55:07.703 [ERROR] Command was /bin/sh -c cd 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target 
> && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=1 -XX:+UseG1GC -jar 
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target/surefire/surefirebooter714252487017838305.jar
>  
> /home/travis/build/apache/flink/flink-table/flink-table-planner-blink/target/surefire
>  2020-03-17T09-34-41_826-jvmRun1 surefire4625103637332937565tmp 
> surefire_43192129054983363633tmp
> 09:55:07.703 [ERROR] Error occurred in starting fork, check output in log
> 09:55:07.703 [ERROR] Process Exit Code: 137
> 09:55:07.703 [ERROR] Crashed tests:
> 09:55:07.703 [ERROR] org.apache.flink.table.api.TableEnvironmentITCase
> 09:55:07.703 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:510)
> 09:55:07.704 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkOnceMultiple(ForkStarter.java:382)
> 09:55:07.704 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:297)
> 09:55:07.704 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:246)
> 09:55:07.704 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183)
> 09:55:07.704 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011)
> 09:55:07.704 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:857)
> 09:55:07.704 [ERROR] at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
> 09:55:07.704 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> 09:55:07.704 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> 09:55:07.704 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> 09:55:07.704 [ERROR] at 
> 

[GitHub] [flink] KarmaGYZ commented on a change in pull request #11822: [FLINK-16788][Connector]ElasticSearch Connector SQL DDL add optional config (eg: username/password)

2020-04-20 Thread GitBox


KarmaGYZ commented on a change in pull request #11822:
URL: https://github.com/apache/flink/pull/11822#discussion_r411364910



##
File path: 
flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/table/descriptors/ElasticsearchValidator.java
##
@@ -76,12 +78,18 @@ public void validate(DescriptorProperties properties) {
properties.validateValue(CONNECTOR_TYPE, 
CONNECTOR_TYPE_VALUE_ELASTICSEARCH, false);
validateVersion(properties);
validateHosts(properties);
+   validateAuth(properties);
validateGeneralProperties(properties);
validateFailureHandler(properties);
validateBulkFlush(properties);
validateConnectionProperties(properties);
}
 
+   private void validateAuth(DescriptorProperties properties) {
+   properties.validateString(CONNECTOR_USERNAME, true);
+   properties.validateString(CONNECTOR_PASSWORD, true);

Review comment:
   I think it makes sense to set the minLen of the username to 1. Besides, 
if the properties contain `CONNECTOR_USERNAME`, it seems the 
`CONNECTOR_PASSWORD` should no longer be optional.

##
File path: 
flink-connectors/flink-connector-elasticsearch6/src/main/java/org/apache/flink/streaming/connectors/elasticsearch6/Elasticsearch6UpsertTableSink.java
##
@@ -189,6 +207,67 @@ protected ElasticsearchUpsertTableSinkBase copy(
// Helper classes
// 

 
+   /**
+* This class implements {@link RestClientFactory}, used for es with 
authentication.
+*/
+   static class AuthRestClientFactory implements RestClientFactory {

Review comment:
   Does it make sense to add userName and password as two optional fields 
to the `DefaultRestClientFactory` instead of introducing this class?

##
File path: 
flink-connectors/flink-connector-elasticsearch7/src/main/java/org/apache/flink/streaming/connectors/elasticsearch7/Elasticsearch7UpsertTableSink.java
##
@@ -211,6 +227,59 @@ protected ElasticsearchUpsertTableSinkBase copy(
// Helper classes
// 

 
+   /**
+* This class implements {@link RestClientFactory}, used for es with 
authentication.
+*/
+   static class AuthRestClientFactory implements RestClientFactory {

Review comment:
   Same as above.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] leonardBang commented on issue #11835: [hotfix][table sql planner/table sql legacy planner]fix icu license in NOTICE file.

2020-04-20 Thread GitBox


leonardBang commented on issue #11835:
URL: https://github.com/apache/flink/pull/11835#issuecomment-616958951


   Hello @aljoscha, could you take a look at this PR?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] leonardBang commented on issue #11820: [FLINK-17264][scripts] Fix broken taskmanager.sh by using -eq to compare num_lines

2020-04-20 Thread GitBox


leonardBang commented on issue #11820:
URL: https://github.com/apache/flink/pull/11820#issuecomment-616957287


   > Sorry for introducing this bug in the code. I discovered that there are also 
other scripts (mostly e2e test scripts, for example `test_cli.sh` and 
`shade.sh`) that use `wc -l` and `==` to compare numbers. Shall we fix them too?
   
   The root cause is the whitespace padding that `wc -l` adds in front of the number on macOS, see 
https://apple.stackexchange.com/questions/370366/why-is-wc-c-printing-spaces-before-the-number
 
   If these e2e tests fail on macOS (mostly a dev environment), I think we should 
open a JIRA to track and repair them too.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11835: [hotfix][table sql planner/table sql legacy planner]fix icu license in NOTICE file.

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11835:
URL: https://github.com/apache/flink/pull/11835#issuecomment-616914851


   
   ## CI report:
   
   * 3f4e611749de42b1c7b18a7a421879e2a55379d1 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161161009) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7812)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on issue #11837: [FLINK-16160][table-planner-blink] Fix proctime()/rowtime() doesn't w…

2020-04-20 Thread GitBox


flinkbot commented on issue #11837:
URL: https://github.com/apache/flink/pull/11837#issuecomment-616956896


   
   ## CI report:
   
   * afa79d577eecd36ad2843f50380c07a33bedaa6c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11820: [FLINK-17264][scripts] Fix broken taskmanager.sh by using -eq to compare num_lines

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11820:
URL: https://github.com/apache/flink/pull/11820#issuecomment-616475056


   
   ## CI report:
   
   * 56a03626e5f03a65e74708c424fd8503ba563a32 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161065593) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7794)
 
   * bb53e79998e8bdc17c602aa399ef1a4df3e42875 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11578: [RELEASE-1.10][FLINK-16874] Respect the dynamic options when calculating memory opt…

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11578:
URL: https://github.com/apache/flink/pull/11578#issuecomment-606518441


   
   ## CI report:
   
   * e4a8b6583841ddaa4abb405d7988b6e5883c085d Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/160523697) 
   * ae19b37903d8116e3457103ea4dac98fb8a85a23 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11687: [FLINK-16536][network][checkpointing] Implement InputChannel state recovery for unaligned checkpoint

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11687:
URL: https://github.com/apache/flink/pull/11687#issuecomment-611445542


   
   ## CI report:
   
   * 5bad018f00f85a7359345187a12d7938aa510d25 UNKNOWN
   * d51ce7f47381d99b843278cd701dcff223761a0b UNKNOWN
   * cba096ae3d8eba4a0d39c64659f76ad10a62be27 UNKNOWN
   * f20368de0bdc14ae26ea92256137a9551bdb0879 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/161162880) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7814)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dianfu commented on a change in pull request #11836: [FLINK-17188][python] Use pip instead of conda to install flake8 and sphinx

2020-04-20 Thread GitBox


dianfu commented on a change in pull request #11836:
URL: https://github.com/apache/flink/pull/11836#discussion_r411841130



##
File path: flink-python/dev/lint-python.sh
##
@@ -244,6 +244,7 @@ function install_py_env() {
 # Install tox.
 # In some situations,you need to run the script with "sudo". e.g. sudo 
./lint-python.sh
 function install_tox() {
+source ${CONDA_HOME}/bin/activate

Review comment:
   Remove the brace





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-17013) Support Python UDTF in old planner under batch mode

2020-04-20 Thread Hequn Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng closed FLINK-17013.
---
Resolution: Resolved

> Support Python UDTF in old planner under batch mode
> ---
>
> Key: FLINK-17013
> URL: https://issues.apache.org/jira/browse/FLINK-17013
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, Python UDTF has been supported under the flink planner (stream mode 
> only) and the blink planner. This jira is dedicated to adding Python UDTF support 
> for the flink planner under batch mode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17013) Support Python UDTF in old planner under batch mode

2020-04-20 Thread Hequn Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17088298#comment-17088298
 ] 

Hequn Cheng commented on FLINK-17013:
-

Resolved in 1.11.0 via bda9b7765471f62665fd1823be527ed0c9c9bb48

> Support Python UDTF in old planner under batch mode
> ---
>
> Key: FLINK-17013
> URL: https://issues.apache.org/jira/browse/FLINK-17013
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, Python UDTF has been supported under the flink planner (stream mode 
> only) and the blink planner. This jira is dedicated to adding Python UDTF support 
> for the flink planner under batch mode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] hequn8128 removed a comment on issue #11668: [FLINK-17013][python] Support Python UDTF in old planner under batch mode

2020-04-20 Thread GitBox


hequn8128 removed a comment on issue #11668:
URL: https://github.com/apache/flink/pull/11668#issuecomment-616952438


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] hequn8128 commented on issue #11668: [FLINK-17013][python] Support Python UDTF in old planner under batch mode

2020-04-20 Thread GitBox


hequn8128 commented on issue #11668:
URL: https://github.com/apache/flink/pull/11668#issuecomment-616952779


   @HuangXingBo Thanks a lot for the update. Merging...



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] hequn8128 commented on issue #11668: [FLINK-17013][python] Support Python UDTF in old planner under batch mode

2020-04-20 Thread GitBox


hequn8128 commented on issue #11668:
URL: https://github.com/apache/flink/pull/11668#issuecomment-616952438


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wangyang0918 commented on issue #11820: [FLINK-17264][scripts] Fix broken taskmanager.sh by using -eq to compare num_lines

2020-04-20 Thread GitBox


wangyang0918 commented on issue #11820:
URL: https://github.com/apache/flink/pull/11820#issuecomment-616950599


   Thanks @aljoscha @leonardBang, I have updated the PR to use `-ne`.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] TsReaper commented on issue #11820: [FLINK-17264][scripts] Fix broken taskmanager.sh by using -eq to compare num_lines

2020-04-20 Thread GitBox


TsReaper commented on issue #11820:
URL: https://github.com/apache/flink/pull/11820#issuecomment-616950764


   Sorry for introducing this bug in the code. I discovered that there are also 
other scripts (mostly e2e test scripts, for example `test_cli.sh` and 
`shade.sh`) that use `wc -l` and `==` to compare numbers. Shall we fix them too?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on issue #11837: [FLINK-16160][table-planner-blink] Fix proctime()/rowtime() doesn't w…

2020-04-20 Thread GitBox


flinkbot commented on issue #11837:
URL: https://github.com/apache/flink/pull/11837#issuecomment-616946674


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit afa79d577eecd36ad2843f50380c07a33bedaa6c (Tue Apr 21 
04:35:39 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-17286) Integrate json to file system connector

2020-04-20 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-17286:


Assignee: Leonard Xu

> Integrate json to file system connector
> ---
>
> Key: FLINK-17286
> URL: https://issues.apache.org/jira/browse/FLINK-17286
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / FileSystem, Formats (JSON, Avro, Parquet, 
> ORC, SequenceFile)
>Reporter: Jingsong Lee
>Assignee: Leonard Xu
>Priority: Major
> Fix For: 1.11.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11829: [FLINK-17021][table-planner-blink] Blink batch planner set GlobalDataExchangeMode

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11829:
URL: https://github.com/apache/flink/pull/11829#issuecomment-616561190


   
   ## CI report:
   
   * b55c1d368238599e824f600bd4786ca2ed31f681 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/161046863) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7791)
 
   * c01e235c3048c97accdd33f7cfe2b03f6f60c8b3 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161169854) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7822)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11794: [FLINK-17126] [table-planner] Correct the execution behavior of BatchTableEnvironment

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11794:
URL: https://github.com/apache/flink/pull/11794#issuecomment-615243118


   
   ## CI report:
   
   * 5bcc43b771b5c05d027268b52cf81121db562a7a Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/160968796) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7744)
 
   * 7c2cf88b04594bb73875fa8bb6ea5aa60d7a1bd9 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161169815) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7821)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-17286) Integrate json to file system connector

2020-04-20 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-17286:


 Summary: Integrate json to file system connector
 Key: FLINK-17286
 URL: https://issues.apache.org/jira/browse/FLINK-17286
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / FileSystem, Formats (JSON, Avro, Parquet, 
ORC, SequenceFile)
Reporter: Jingsong Lee
 Fix For: 1.11.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11797: [FLINK-17169][table-blink] Refactor BaseRow to use RowKind instead of byte header

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11797:
URL: https://github.com/apache/flink/pull/11797#issuecomment-615294694


   
   ## CI report:
   
   * 85f40e3041783b1dbda1eb3b812f23e77936f7b3 UNKNOWN
   * 5b8d39bef382e260dbec301105c32dde88153245 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161160982) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7811)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11774: [FLINK-17020][runtime] Introduce GlobalDataExchangeMode for JobGraph generation

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11774:
URL: https://github.com/apache/flink/pull/11774#issuecomment-614576955


   
   ## CI report:
   
   * 1fe605210a2a1331d1ecf85c952ad2d5bd5fe8ea Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/160532764) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7590)
 
   * 1f72fb850f69449f4ef886ec0cad8a0644bab93d Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161169787) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7820)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-16931) Large _metadata file lead to JobManager not responding when restart

2020-04-20 Thread Lu Niu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17088282#comment-17088282
 ] 

Lu Niu commented on FLINK-16931:


[~trohrmann] Thanks for the advice. Certainly there is much more that needs to be 
considered. [~pnowojski], if you think we could collaborate on this, please 
let me know the action plan. Thanks!

> Large _metadata file lead to JobManager not responding when restart
> ---
>
> Key: FLINK-16931
> URL: https://issues.apache.org/jira/browse/FLINK-16931
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing, Runtime / Coordination
>Affects Versions: 1.9.2, 1.10.0, 1.11.0
>Reporter: Lu Niu
>Assignee: Lu Niu
>Priority: Critical
> Fix For: 1.11.0
>
>
> When the _metadata file is big, the JobManager can never recover from the 
> checkpoint. It falls into a loop of fetch checkpoint -> JM timeout -> restart. 
> Here is the related log: 
> {code:java}
>  2020-04-01 17:08:25,689 INFO 
> org.apache.flink.runtime.checkpoint.ZooKeeperCompletedCheckpointStore - 
> Recovering checkpoints from ZooKeeper.
>  2020-04-01 17:08:25,698 INFO 
> org.apache.flink.runtime.checkpoint.ZooKeeperCompletedCheckpointStore - Found 
> 3 checkpoints in ZooKeeper.
>  2020-04-01 17:08:25,698 INFO 
> org.apache.flink.runtime.checkpoint.ZooKeeperCompletedCheckpointStore - 
> Trying to fetch 3 checkpoints from storage.
>  2020-04-01 17:08:25,698 INFO 
> org.apache.flink.runtime.checkpoint.ZooKeeperCompletedCheckpointStore - 
> Trying to retrieve checkpoint 50.
>  2020-04-01 17:08:48,589 INFO 
> org.apache.flink.runtime.checkpoint.ZooKeeperCompletedCheckpointStore - 
> Trying to retrieve checkpoint 51.
>  2020-04-01 17:09:12,775 INFO org.apache.flink.yarn.YarnResourceManager - The 
> heartbeat of JobManager with id 02500708baf0bb976891c391afd3d7d5 timed out.
> {code}
> Digging into the code, it looks like ExecutionGraph::restart runs in the JobMaster 
> main thread and finally calls 
> ZooKeeperCompletedCheckpointStore::retrieveCompletedCheckpoint, which downloads 
> the file from DFS. The main thread is basically blocked for a while because of 
> this. One possible solution is to make the downloading part async. More 
> things might need to be considered, as the original change tries to make it 
> single-threaded. [https://github.com/apache/flink/pull/7568]
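
A minimal sketch of the async idea, with made-up method names standing in for the real 
checkpoint store and executors (the actual fix would have to respect the JobMaster 
threading model):

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncCheckpointFetch {

	/** Stand-in for the blocking DFS read of a (possibly huge) _metadata file. */
	static byte[] retrieveCompletedCheckpoint(long checkpointId) {
		return new byte[0];
	}

	public static void main(String[] args) {
		ExecutorService ioExecutor = Executors.newFixedThreadPool(4); // does the blocking I/O
		Executor mainThreadExecutor = Runnable::run;                  // stand-in for the JobMaster main thread

		// Download off the main thread, then hand the result back to it, so heartbeats keep flowing.
		CompletableFuture
			.supplyAsync(() -> retrieveCompletedCheckpoint(50L), ioExecutor)
			.thenAcceptAsync(
				metadata -> System.out.println("restoring from " + metadata.length + " bytes of metadata"),
				mainThreadExecutor)
			.join();

		ioExecutor.shutdown();
	}
}
{code}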



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] docete opened a new pull request #11837: [FLINK-16160][table-planner-blink] Fix proctime()/rowtime() doesn't w…

2020-04-20 Thread GitBox


docete opened a new pull request #11837:
URL: https://github.com/apache/flink/pull/11837


   …ork for TableEnvironment.connect().createTemporaryTable()
   
   
   ## What is the purpose of the change
   Since FLINK-14490, proctime()/rowtime() doesn't work for 
TableEnvironment.connect().createTemporaryTable(). The root cause is:
   - proctime()/rowtime() are used along with 
DefinedRowtimeAttributes/DefinedProctimeAttribute and ConnectorCatalogTable. 
The original code path stores the ConnectorCatalogTable object in the Catalog, and 
in the validation phase the RowType is derived from ConnectorCatalogTable.getSchema, 
which contains the time indicator. After FLINK-14490, we store a CatalogTableImpl 
object in the Catalog, and in the validation phase the RowType is derived from 
CatalogTableImpl.getSchema, which doesn't contain the time indicator.
   - In the SqlToRel phase, FlinkCalciteCatalogReader converts 
ConnectorCatalogTable to TableSourceTable and converts CatalogTable to 
CatalogSourceTable. The TableSourceTable is converted to a LogicalTableScan 
directly and contains the time indicator. Otherwise the CatalogSourceTable is 
converted to a LogicalTableScan whose time indicator is erased (by FLINK-16345).
   This PR fixes it.
   
   ## Brief change log
   - instantiate the TableSource in CatalogSchemaTable and check if it's a 
DefinedRowtimeAttributes/DefinedProctimeAttribute instance. If so, rewrite the 
TableSchema to patch in the time indicator (as is done in 
ConnectorCatalogTable#calculateSourceSchema)
   - Avoid erasing time indicator in CatalogSourceTable if the TableSource is a 
DefinedRowtimeAttributes/DefinedProctimeAttribute instance
   
   ## Verifying this change
   
   This change added tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (**yes** / no)
 - If yes, how is the feature documented? (not applicable / docs / 
**JavaDocs** / not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-16160) Schema#proctime and Schema#rowtime don't work in TableEnvironment#connect code path

2020-04-20 Thread Zhenghua Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17088275#comment-17088275
 ] 

Zhenghua Gao commented on FLINK-16160:
--

The root causes are:
 * proctime()/rowtime() are used along with 
DefinedRowtimeAttributes/DefinedProctimeAttribute and ConnectorCatalogTable. 
The original code path stores the ConnectorCatalogTable object in the Catalog, and 
in the validation phase the RowType is derived from ConnectorCatalogTable.getSchema, 
which contains the time indicator. After FLINK-14490, we store a CatalogTableImpl 
object in the Catalog, and in the validation phase the RowType is derived from 
CatalogTableImpl.getSchema, which doesn't contain the time indicator.
 * In the SqlToRel phase, FlinkCalciteCatalogReader converts ConnectorCatalogTable 
to TableSourceTable and converts CatalogTable to CatalogSourceTable. The 
TableSourceTable is converted to a LogicalTableScan directly and contains the 
time indicator. Otherwise the CatalogSourceTable is converted to a 
LogicalTableScan whose time indicator is erased (by FLINK-16345).

The solution is straightforward:
 * We should instantiate the TableSource in CatalogSchemaTable and check if 
it's a DefinedRowtimeAttributes/DefinedProctimeAttribute instance. If so, 
rewrite the TableSchema to patch in the time indicator (as is done in 
ConnectorCatalogTable#calculateSourceSchema). This will pass the validation.
 * Avoid erasing the time indicator in CatalogSourceTable if the TableSource is a 
DefinedRowtimeAttributes/DefinedProctimeAttribute instance.

> Schema#proctime and Schema#rowtime don't work in TableEnvironment#connect 
> code path
> ---
>
> Key: FLINK-16160
> URL: https://issues.apache.org/jira/browse/FLINK-16160
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Zhenghua Gao
>Assignee: Zhenghua Gao
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In ConnectTableDescriptor#createTemporaryTable, the proctime/rowtime 
> properties are ignored so the generated catalog table is not correct. We 
> should fix this to let TableEnvironment#connect() support watermark.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17273) Fix not calling ResourceManager#closeTaskManagerConnection in KubernetesResourceManager in case of registered TaskExecutor failure

2020-04-20 Thread Canbin Zheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17088276#comment-17088276
 ] 

Canbin Zheng commented on FLINK-17273:
--

cc [~xintongsong]

> Fix not calling ResourceManager#closeTaskManagerConnection in 
> KubernetesResourceManager in case of registered TaskExecutor failure
> --
>
> Key: FLINK-17273
> URL: https://issues.apache.org/jira/browse/FLINK-17273
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Runtime / Coordination
>Affects Versions: 1.10.0, 1.10.1
>Reporter: Canbin Zheng
>Assignee: Canbin Zheng
>Priority: Major
> Fix For: 1.11.0
>
>
> At the moment, the {{KubernetesResourceManager}} does not call the method of 
> {{ResourceManager#closeTaskManagerConnection}} once it detects that a 
> currently registered task executor has failed. This ticket proposes to fix 
> this problem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-training] tzulitai commented on issue #2: [FLINK-17276][checkstyle] add definitions from Flink and enforce them

2020-04-20 Thread GitBox


tzulitai commented on issue #2:
URL: https://github.com/apache/flink-training/pull/2#issuecomment-616942594


     Always good to have checkstyle enforced, no objections here either. LGTM 
from my side.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-training] tzulitai commented on issue #4: [FLINK-17278][travis] add Travis configuration

2020-04-20 Thread GitBox


tzulitai commented on issue #4:
URL: https://github.com/apache/flink-training/pull/4#issuecomment-616941495


   Travis is green, so LGTM  



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11829: [FLINK-17021][table-planner-blink] Blink batch planner set GlobalDataExchangeMode

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11829:
URL: https://github.com/apache/flink/pull/11829#issuecomment-616561190


   
   ## CI report:
   
   * b55c1d368238599e824f600bd4786ca2ed31f681 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/161046863) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7791)
 
   * c01e235c3048c97accdd33f7cfe2b03f6f60c8b3 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11785: [FLINK-17206][table] refactor function catalog to support delayed UDF initialization.

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11785:
URL: https://github.com/apache/flink/pull/11785#issuecomment-615087743


   
   ## CI report:
   
   * bda5c47abf20ea3682c2b1d21188d1e08edd1d87 UNKNOWN
   * 7fee2a180eea4d0e37ff86f04d728b5b501a3679 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161164657) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7817)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11774: [FLINK-17020][runtime] Introduce GlobalDataExchangeMode for JobGraph generation

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11774:
URL: https://github.com/apache/flink/pull/11774#issuecomment-614576955


   
   ## CI report:
   
   * 1fe605210a2a1331d1ecf85c952ad2d5bd5fe8ea Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/160532764) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7590)
 
   * 1f72fb850f69449f4ef886ec0cad8a0644bab93d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11794: [FLINK-17126] [table-planner] Correct the execution behavior of BatchTableEnvironment

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11794:
URL: https://github.com/apache/flink/pull/11794#issuecomment-615243118


   
   ## CI report:
   
   * 5bcc43b771b5c05d027268b52cf81121db562a7a Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/160968796) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7744)
 
   * 7c2cf88b04594bb73875fa8bb6ea5aa60d7a1bd9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-training] tzulitai commented on a change in pull request #1: [FLINK-17275] port core training exercises, descriptions, solutions and tests

2020-04-20 Thread GitBox


tzulitai commented on a change in pull request #1:
URL: https://github.com/apache/flink-training/pull/1#discussion_r411849722



##
File path: README.md
##
@@ -1,3 +1,257 @@
+
+
 # Flink Training Exercises
 
 Exercises that go along with the training content in the documentation.
+
+## Table of Contents
+
+[**Setup your Development Environment**](#setup-your-development-environment)
+
+1. [Software requirements](#software-requirements)
+1. [Clone and build the flink-training 
project](#clone-and-build-the-flink-training-project)
+1. [Import the flink-training project into your 
IDE](#import-the-flink-training-project-into-your-ide)
+1. [Download the data sets](#download-the-data-sets)
+
+[**Using the Taxi Data Streams**](#using-the-taxi-data-streams)
+
+1. [Schema of Taxi Ride Events](#schema-of-taxi-ride-events)
+1. [Generating Taxi Ride Data Streams in a Flink 
program](#generating-taxi-ride-data-streams-in-a-flink-program)
+
+[**How to do the Labs**](#how-to-do-the-labs)
+
+1. [Learn about the data](#learn-about-the-data)
+1. [Modify `ExerciseBase`](#modify-exercisebase)
+1. [Run and debug Flink programs in your 
IDE](#run-and-debug-flink-programs-in-your-ide)
+1. [Exercises, Tests, and Solutions](#exercises-tests-and-solutions)
+
+[**Labs**](LABS-OVERVIEW.md)
+
+## Setup your Development Environment
+
+The following instructions guide you through the process of setting up a 
development environment for the purpose of developing, debugging, and executing 
solutions to the Flink developer training exercises and examples.
+
+### Software requirements
+
+Flink supports Linux, OS X, and Windows as development environments for Flink 
programs and local execution. The following software is required for a Flink 
development setup and should be installed on your system:
+
+- a JDK for Java 8 or Java 11 (a JRE is not sufficient; other versions of Java 
are not supported)
+- Git
+- an IDE for Java (and/or Scala) development with Gradle support.
+  We recommend IntelliJ, but Eclipse or Visual Studio Code can also be used so 
long as you stick to Java. For Scala you will need to use IntelliJ (and its 
Scala plugin).
+
+> **:information_source: Note for Windows users:** Many of the examples of 
shell commands provided in the training instructions are for UNIX systems. To 
make things easier, you may find it worthwhile to setup cygwin or WSL. For 
developing Flink jobs, Windows works reasonably well: you can run a Flink 
cluster on a single machine, submit jobs, run the webUI, and execute jobs in 
the IDE.
+
+### Clone and build the flink-training project
+
+This `flink-training` project contains exercises, tests, and reference 
solutions for the programming exercises. Clone the `flink-training` project 
from Github and build it.
+
+> **:information_source: Repository Layout:** This repository has several 
branches set up pointing to different Apache Flink versions, similarly to the 
[apache/flink](https://github.com/apache/flink) repository with:
+> - a release branch for each minor version of Apache Flink, e.g. 
`release-1.10`, and
+> - a `master` branch that points to the current Flink release (not 
`flink:master`!)
+>
+> If you want to work on a version other than the current Flink release, make 
sure to check out the appropriate branch.
+
+```bash
+git clone https://github.com/apache/flink-training.git
+cd flink-training
+./gradlew test shadowJar
+```
+
+If you haven’t done this before, at this point you’ll end up downloading all 
of the dependencies for this Flink training project. This usually takes a few 
minutes, depending on the speed of your internet connection.
+
+If all of the tests pass and the build is successful, you are off to a good 
start.
+
+
+Users in China: click here for instructions about using a 
local maven mirror.
+
+If you are in China, we recommend configuring the maven repository to use a 
mirror. You can do this by uncommenting the appropriate line in our 
[`build.gradle`](build.gradle) like this:
+
+```groovy
+repositories {
+// for access from China, you may need to uncomment this line
+maven { url 'http://maven.aliyun.com/nexus/content/groups/public/' }
+mavenCentral()
+}
+```
+
+
+
+### Import the flink-training project into your IDE
+
+The project needs to be imported as a gradle project into your IDE.
+
+Once that’s done you should be able to open 
[`RideCleansingTest`](ride-cleansing/src/test/java/org/apache/flink/training/exercises/ridecleansing/RideCleansingTest.java)
 and successfully run this test.
+
+> **:information_source: Note for Scala users:** You will need to use IntelliJ 
with the JetBrains Scala plugin, and you will need to add a Scala 2.12 SDK to 
the Global Libraries section of the Project Structure. IntelliJ will ask you 
for the latter when you open a Scala file.
+
+### Download the data sets
+
+You will also need to download the taxi data files used in this training by 
running the following commands
+
+```bash

[GitHub] [flink] wuchong commented on a change in pull request #11766: [FLINK-16812][jdbc] support array types in PostgresRowConverter

2020-04-20 Thread GitBox


wuchong commented on a change in pull request #11766:
URL: https://github.com/apache/flink/pull/11766#discussion_r411851777



##
File path: 
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/source/row/converter/PostgresRowConverter.java
##
@@ -28,4 +34,39 @@
public PostgresRowConverter(RowType rowType) {
super(rowType);
}
+
+   @Override
+   public JDBCFieldConverter createConverter(LogicalType type) {
+   LogicalTypeRoot root = type.getTypeRoot();
+
+   if (root == LogicalTypeRoot.ARRAY) {
+   ArrayType arrayType = (ArrayType) type;
+   LogicalTypeRoot elemType = 
arrayType.getElementType().getTypeRoot();
+
+   if (elemType == LogicalTypeRoot.VARBINARY) {
+
+   return v -> {
+   PgArray pgArray = (PgArray) v;
+   Object[] in = (Object[]) 
pgArray.getArray();
+
+   Object[] out = new Object[in.length];
+   for (int i = 0; i < in.length; i++) {
+   out[i] = ((PGobject) 
in[i]).getValue().getBytes();
+   }
+
+   return out;
+   };
+   } else {
+   return v -> ((PgArray) v).getArray();

Review comment:
   Should we add a default conversion for ARRAY in 
`AbstractJDBCRowConverter`? Currently, we directly put the `java.sql.Array` 
into the Row, which is not correct. 
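
   A hedged sketch of what such a default ARRAY conversion could look like (an 
assumption for illustration, not the PR's actual code):

   ```java
   import java.sql.Array;
   import java.sql.SQLException;

   final class ArrayConversionSketch {

       /**
        * Sketch: unwrap the java.sql.Array handed back by the JDBC driver into a
        * plain Object[] before it is put into the Row, instead of the Array itself.
        */
       static Object[] toObjectArray(Object jdbcValue) throws SQLException {
           return (Object[]) ((Array) jdbcValue).getArray();
       }
   }
   ```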

##
File path: 
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/source/row/converter/PostgresRowConverter.java
##
@@ -28,4 +34,39 @@
public PostgresRowConverter(RowType rowType) {
super(rowType);
}
+
+   @Override
+   public JDBCFieldConverter createConverter(LogicalType type) {
+   LogicalTypeRoot root = type.getTypeRoot();
+
+   if (root == LogicalTypeRoot.ARRAY) {
+   ArrayType arrayType = (ArrayType) type;
+   LogicalTypeRoot elemType = 
arrayType.getElementType().getTypeRoot();
+
+   if (elemType == LogicalTypeRoot.VARBINARY) {
+
+   return v -> {
+   PgArray pgArray = (PgArray) v;
+   Object[] in = (Object[]) 
pgArray.getArray();
+
+   Object[] out = new Object[in.length];
+   for (int i = 0; i < in.length; i++) {
+   out[i] = ((PGobject) 
in[i]).getValue().getBytes();
+   }

Review comment:
   Could you add a comment explaining why PG needs special handling for `ARRAY`?

##
File path: 
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/source/row/converter/PostgresRowConverter.java
##
@@ -28,4 +34,39 @@
public PostgresRowConverter(RowType rowType) {
super(rowType);
}
+
+   @Override
+   public JDBCFieldConverter createConverter(LogicalType type) {
+   LogicalTypeRoot root = type.getTypeRoot();
+
+   if (root == LogicalTypeRoot.ARRAY) {
+   ArrayType arrayType = (ArrayType) type;
+   LogicalTypeRoot elemType = 
arrayType.getElementType().getTypeRoot();
+
+   if (elemType == LogicalTypeRoot.VARBINARY) {

Review comment:
   I would suggest using `LogicalTypeChecks#hasFamily(elemType, 
LogicalTypeFamily.BINARY_STRING)` to also support `BINARY`.
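
   A small self-contained sketch of that check (passing the element `LogicalType` 
rather than its type root):

   ```java
   import org.apache.flink.table.types.logical.LogicalType;
   import org.apache.flink.table.types.logical.LogicalTypeFamily;
   import org.apache.flink.table.types.logical.utils.LogicalTypeChecks;

   final class BinaryFamilyCheck {

       /** Sketch: returns true for both BINARY and VARBINARY element types. */
       static boolean isBinaryString(LogicalType elementType) {
           return LogicalTypeChecks.hasFamily(elementType, LogicalTypeFamily.BINARY_STRING);
       }
   }
   ```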





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on a change in pull request #11804: [FLINK-16473][doc][jdbc] add documentation for JDBCCatalog and PostgresCatalog

2020-04-20 Thread GitBox


wuchong commented on a change in pull request #11804:
URL: https://github.com/apache/flink/pull/11804#discussion_r411846216



##
File path: docs/dev/table/catalogs.md
##
@@ -37,6 +41,76 @@ Or permanent metadata, like that in a Hive Metastore. 
Catalogs provide a unified
 
 The `GenericInMemoryCatalog` is an in-memory implementation of a catalog. All 
objects will be available only for the lifetime of the session.
 
+### JDBCCatalog
+
+The `JDBCCatalog` enables users to connect Flink to relational databases over 
JDBC protocol.
+
+ PostgresCatalog
+
+`PostgresCatalog` is the only implementation of JDBC Catalog at the moment.
+
+To set a `JDBCcatalog`,

Review comment:
   ```suggestion
   To set a `JDBCatalog`,
   ```

##
File path: docs/dev/table/catalogs.md
##
@@ -37,6 +41,76 @@ Or permanent metadata, like that in a Hive Metastore. 
Catalogs provide a unified
 
 The `GenericInMemoryCatalog` is an in-memory implementation of a catalog. All 
objects will be available only for the lifetime of the session.
 
+### JDBCCatalog
+
+The `JDBCCatalog` enables users to connect Flink to relational databases over 
JDBC protocol.
+
+ PostgresCatalog
+
+`PostgresCatalog` is the only implementation of JDBC Catalog at the moment.
+
+To set a `JDBCcatalog`,
+
+
+
+{% highlight java %}
+
+EnvironmentSettings settings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
+TableEnvironment tableEnv = TableEnvironment.create(settings);
+
+String name= "mypg";
+String defaultDatabase = "mydb";
+String username= "...";
+String password= "...";
+String baseUrl = "jdbc:postgresql://:"; # should not contain 
database name here

Review comment:
   ```suggestion
   String baseUrl = "jdbc:postgresql://:"; // should not 
contain database name here
   ```

##
File path: docs/dev/table/catalogs.md
##
@@ -37,6 +41,76 @@ Or permanent metadata, like that in a Hive Metastore. 
Catalogs provide a unified
 
 The `GenericInMemoryCatalog` is an in-memory implementation of a catalog. All 
objects will be available only for the lifetime of the session.
 
+### JDBCCatalog
+
+The `JDBCCatalog` enables users to connect Flink to relational databases over 
JDBC protocol.
+
+ PostgresCatalog
+
+`PostgresCatalog` is the only implementation of JDBC Catalog at the moment.
+
+To set a `JDBCcatalog`,
+
+
+
+{% highlight java %}
+
+EnvironmentSettings settings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
+TableEnvironment tableEnv = TableEnvironment.create(settings);
+
+String name= "mypg";
+String defaultDatabase = "mydb";
+String username= "...";
+String password= "...";
+String baseUrl = "jdbc:postgresql://:"; # should not contain 
database name here
+
+JDBCCatalog catalog = new JDBCCatalog(name, defaultDatabase, username, 
password, baseUrl);
+tableEnv.registerCatalog("mypg", catalog);
+
+// set the JDBCCatalog as the current catalog of the session
+tableEnv.useCatalog("mypg");
+{% endhighlight %}
+
+
+{% highlight scala %}
+
+val settings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build()
+val tableEnv = TableEnvironment.create(settings)
+
+val name= "mypg";
+val defaultDatabase = "mydb";
+val username= "...";
+val password= "...";
+val baseUrl = "jdbc:postgresql://:"; # should not contain 
database name here
+
+val catalog = new JDBCCatalog(name, defaultDatabase, username, password, 
baseUrl);
+tableEnv.registerCatalog("mypg", catalog);
+
+// set the JDBCCatalog as the current catalog of the session
+tableEnv.useCatalog("mypg");
+{% endhighlight %}
+
+
+{% highlight yaml %}
+
+execution:
+planner: blink
+...
+current-catalog: mypg  # set the JDBCCatalog as the current catalog of the 
session
+current-database: mydb
+
+catalogs:
+   - name: mypg
+ type: jdbc
+ default-database: mydb
+ username: ...
+ password: ...
+ base-url: jdbc:postgresql://:
+{% endhighlight %}

Review comment:
   Could you add some descriptions for the parameters? For example, what is 
the meaning of each parameter, and is it required or optional?

##
File path: docs/dev/table/catalogs.md
##
@@ -37,6 +41,76 @@ Or permanent metadata, like that in a Hive Metastore. 
Catalogs provide a unified
 
 The `GenericInMemoryCatalog` is an in-memory implementation of a catalog. All 
objects will be available only for the lifetime of the session.
 
+### JDBCCatalog
+
+The `JDBCCatalog` enables users to connect Flink to relational databases over 
JDBC protocol.
+
+ PostgresCatalog
+
+`PostgresCatalog` is the only implementation of JDBC Catalog at the moment.
+
+To set a `JDBCcatalog`,
+
+
+
+{% highlight java %}
+
+EnvironmentSettings settings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
+TableEnvironment tableEnv = 

[GitHub] [flink] wuchong commented on a change in pull request #11804: [FLINK-16473][doc][jdbc] add documentation for JDBCCatalog and PostgresCatalog

2020-04-20 Thread GitBox


wuchong commented on a change in pull request #11804:
URL: https://github.com/apache/flink/pull/11804#discussion_r411846216



##
File path: docs/dev/table/catalogs.md
##
@@ -37,6 +41,76 @@ Or permanent metadata, like that in a Hive Metastore. 
Catalogs provide a unified
 
 The `GenericInMemoryCatalog` is an in-memory implementation of a catalog. All 
objects will be available only for the lifetime of the session.
 
+### JDBCCatalog
+
+The `JDBCCatalog` enables users to connect Flink to relational databases over 
JDBC protocol.
+
+ PostgresCatalog
+
+`PostgresCatalog` is the only implementation of JDBC Catalog at the moment.
+
+To set a `JDBCcatalog`,

Review comment:
   ```suggestion
   To set a `JDBCCatalog`,
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] leonardBang commented on issue #11764: [FLINK-17023][scripts] Fix format checking of extractExecutionParams in config.sh

2020-04-20 Thread GitBox


leonardBang commented on issue #11764:
URL: https://github.com/apache/flink/pull/11764#issuecomment-616935579


   @wangyang0918 has opened a PR https://github.com/apache/flink/pull/11820



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] leonardBang commented on issue #11764: [FLINK-17023][scripts] Fix format checking of extractExecutionParams in config.sh

2020-04-20 Thread GitBox


leonardBang commented on issue #11764:
URL: https://github.com/apache/flink/pull/11764#issuecomment-616934346


   @tillrohrmann I see the same error on my macOS ...
   I think using `if ! [[ ${num_lines} -eq 1 ]]` to check the number also works.
I'd like to fix this soon.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11785: [FLINK-17206][table] refactor function catalog to support delayed UDF initialization.

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11785:
URL: https://github.com/apache/flink/pull/11785#issuecomment-615087743


   
   ## CI report:
   
   * bda5c47abf20ea3682c2b1d21188d1e08edd1d87 UNKNOWN
   * e55bf1e1dc93609a9303b36e2a53a9941c6a2515 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161005972) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7766)
 
   * 7fee2a180eea4d0e37ff86f04d728b5b501a3679 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161164657) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7817)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11823: [FLINK-16412][hive] Disallow embedded metastore in HiveCatalog produc…

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11823:
URL: https://github.com/apache/flink/pull/11823#issuecomment-616533916


   
   ## CI report:
   
   * b07a02f49b062c7aa965c8b084585679bfd2716e Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/161038172) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7783)
 
   * 9c2f9e65d3046ee911f0494ccf20dbcdc92f9ddf Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161164708) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7818)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #10228: [FLINK-14816] Add thread dump feature for taskmanager

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #10228:
URL: https://github.com/apache/flink/pull/10228#issuecomment-554599522


   
   ## CI report:
   
   * 45a8d72869eee0e3eafdc007280725e8043f2521 UNKNOWN
   * c0a3c6e519d1793383794c797064039fa66b90d2 UNKNOWN
   * 69b2f84d8f611aaa55a6d366be0c0abd11ef8d73 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/161097781) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7806)
 
   * 737ce43f7dea5d3e6429a4d75c597630506deb1f Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161164393) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7816)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on a change in pull request #11797: [FLINK-17169][table-blink] Refactor BaseRow to use RowKind instead of byte header

2020-04-20 Thread GitBox


wuchong commented on a change in pull request #11797:
URL: https://github.com/apache/flink/pull/11797#discussion_r411840734



##
File path: flink-core/src/main/java/org/apache/flink/types/RowKind.java
##
@@ -47,10 +47,69 @@
 * needs to retract the previous row first. OR it describes an 
idempotent update, i.e., an update
 * of a row that is uniquely identifiable by a key.
 */
-   UPDATE_AFTER,
+   UPDATE_AFTER("UA", (byte) 2),
 
/**
 * Deletion operation.
 */
-   DELETE
+   DELETE("D", (byte) 3);
+
+   private final String shortString;
+
+   private final byte value;
+
+   /**
+* Creates a {@link RowKind} enum with the given short string and byte 
value representation of
+* the {@link RowKind}.
+*/
+   RowKind(String shortString, byte value) {

Review comment:
   cc @twalthr , could you have a look the changes for RowKind?
   
   Using a byte value representation will be much faster than using the enum 
ordinal during de/serialization. In my local benchmark, the byte value is 24x 
faster than the ordinal. The disadvantage is that the IDEA code completion 
shows some verbose information (`DELETE("D", (byte) 3)`). 
   
   Benchmark code: 
https://github.com/wuchong/my-benchmark/blob/master/src/main/java/myflink/EnumBenchmark.java
   
   Benchmark Result:
   
   ```
   # Run complete. Total time: 00:03:35
   
   Benchmark   Mode  Cnt  Score  Error   Units
   EnumBenchmark.testOrdinal  thrpt   20876.048 ±   18.128  ops/ms
   EnumBenchmark.testValuethrpt   20  20827.764 ± 2084.072  ops/ms
   ```
   
   
![image](https://user-images.githubusercontent.com/5378924/79822112-ff662600-83c2-11ea-9fd6-9ba3fafba927.png)
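
   A self-contained sketch of the technique under discussion (illustrative only, 
not Flink's actual RowKind code):

   ```java
   // An enum that carries an explicit byte value so serializers can write and read
   // that byte directly instead of relying on Enum#ordinal().
   public enum Kind {
       INSERT("I", (byte) 0),
       DELETE("D", (byte) 3);

       private final String shortString;
       private final byte value;

       Kind(String shortString, byte value) {
           this.shortString = shortString;
           this.value = value;
       }

       public String shortString() {
           return shortString;
       }

       public byte toByteValue() {
           return value;
       }

       public static Kind fromByteValue(byte value) {
           switch (value) {
               case 0: return INSERT;
               case 3: return DELETE;
               default: throw new IllegalArgumentException("Unsupported byte value: " + value);
           }
       }
   }
   ```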
   
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on a change in pull request #11797: [FLINK-17169][table-blink] Refactor BaseRow to use RowKind instead of byte header

2020-04-20 Thread GitBox


wuchong commented on a change in pull request #11797:
URL: https://github.com/apache/flink/pull/11797#discussion_r411840734



##
File path: flink-core/src/main/java/org/apache/flink/types/RowKind.java
##
@@ -47,10 +47,69 @@
 * needs to retract the previous row first. OR it describes an 
idempotent update, i.e., an update
 * of a row that is uniquely identifiable by a key.
 */
-   UPDATE_AFTER,
+   UPDATE_AFTER("UA", (byte) 2),
 
/**
 * Deletion operation.
 */
-   DELETE
+   DELETE("D", (byte) 3);
+
+   private final String shortString;
+
+   private final byte value;
+
+   /**
+* Creates a {@link RowKind} enum with the given short string and byte 
value representation of
+* the {@link RowKind}.
+*/
+   RowKind(String shortString, byte value) {

Review comment:
   cc @twalthr , could you have a look the changes for RowKind?
   
   Using a byte value representation will be much faster than using the enum 
ordinal during de/serialization. In my local benchmark, the byte value is 24x 
faster than the ordinal. The disadvantage is that the IDEA code completion 
shows some verbose information. 
   
   Benchmark code: 
https://github.com/wuchong/my-benchmark/blob/master/src/main/java/myflink/EnumBenchmark.java
   
   Benchmark Result:
   
   ```
   # Run complete. Total time: 00:03:35
   
   Benchmark   Mode  Cnt  Score  Error   Units
   EnumBenchmark.testOrdinal  thrpt   20876.048 ±   18.128  ops/ms
   EnumBenchmark.testValuethrpt   20  20827.764 ± 2084.072  ops/ms
   ```
   
   
![image](https://user-images.githubusercontent.com/5378924/79822112-ff662600-83c2-11ea-9fd6-9ba3fafba927.png)
   
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17273) Fix not calling ResourceManager#closeTaskManagerConnection in KubernetesResourceManager in case of registered TaskExecutor failure

2020-04-20 Thread Zili Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088255#comment-17088255
 ] 

Zili Chen commented on FLINK-17273:
---

[~trohrmann][~fly_in_gis]

This seems to be a fast-fail path when a TM (Pod) fails, which we already have in 
the YARN & Mesos code paths. It would be better if you also take a look.



> Fix not calling ResourceManager#closeTaskManagerConnection in 
> KubernetesResourceManager in case of registered TaskExecutor failure
> --
>
> Key: FLINK-17273
> URL: https://issues.apache.org/jira/browse/FLINK-17273
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Runtime / Coordination
>Affects Versions: 1.10.0, 1.10.1
>Reporter: Canbin Zheng
>Assignee: Canbin Zheng
>Priority: Major
> Fix For: 1.11.0
>
>
> At the moment, the {{KubernetesResourceManager}} does not call the method of 
> {{ResourceManager#closeTaskManagerConnection}} once it detects that a 
> currently registered task executor has failed. This ticket proposes to fix 
> this problem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17166) Modify the log4j-console.properties to also output logs into the files for WebUI

2020-04-20 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088254#comment-17088254
 ] 

Yang Wang commented on FLINK-17166:
---

After long consideration, I think we could use {{System.setOut}} and 
{{System.setErr}} to redirect the {{PrintStream}} to a log4j logger. Then we 
could output stdout/stderr to the console and to a file at the same time. I think 
this is a better solution and will attach a PR to implement it.
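
A rough sketch of the idea, for illustration only (the eventual PR may look quite 
different; SLF4J is used here as the logging facade):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.OutputStream;
import java.io.PrintStream;

public final class StdOutRedirector {

    private static final Logger LOG = LoggerFactory.getLogger("stdout");

    /** Redirects System.out so that every line also reaches the configured file appender. */
    public static void redirect() {
        PrintStream console = System.out;
        System.setOut(new PrintStream(new OutputStream() {
            private final StringBuilder buffer = new StringBuilder();

            @Override
            public void write(int b) {
                if (b == '\n') {
                    LOG.info(buffer.toString()); // written to the log file appender
                    console.println(buffer);     // still visible on the console
                    buffer.setLength(0);
                } else {
                    buffer.append((char) b);     // simplification: assumes single-byte characters
                }
            }
        }, true));
    }
}
```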

> Modify the log4j-console.properties to also output logs into the files for 
> WebUI
> 
>
> Key: FLINK-17166
> URL: https://issues.apache.org/jira/browse/FLINK-17166
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Reporter: Andrey Zagrebin
>Assignee: Yang Wang
>Priority: Major
> Fix For: 1.11.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-17273) Fix not calling ResourceManager#closeTaskManagerConnection in KubernetesResourceManager in case of registered TaskExecutor failure

2020-04-20 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen reassigned FLINK-17273:
-

Assignee: Canbin Zheng

> Fix not calling ResourceManager#closeTaskManagerConnection in 
> KubernetesResourceManager in case of registered TaskExecutor failure
> --
>
> Key: FLINK-17273
> URL: https://issues.apache.org/jira/browse/FLINK-17273
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Runtime / Coordination
>Affects Versions: 1.10.0, 1.10.1
>Reporter: Canbin Zheng
>Assignee: Canbin Zheng
>Priority: Major
> Fix For: 1.11.0
>
>
> At the moment, the {{KubernetesResourceManager}} does not call the method of 
> {{ResourceManager#closeTaskManagerConnection}} once it detects that a 
> currently registered task executor has failed. This ticket proposes to fix 
> this problem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-17120) Support Cython Optimizing Python Operations

2020-04-20 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-17120.
---
Resolution: Resolved

Merged to master via 6b80935b28756122cb43053494abd765a1508934

> Support Cython Optimizing Python Operations
> ---
>
> Key: FLINK-17120
> URL: https://issues.apache.org/jira/browse/FLINK-17120
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Huang Xingbo
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Support Cython Optimizing Python Operations



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] lamber-ken commented on a change in pull request #10228: [FLINK-14816] Add thread dump feature for taskmanager

2020-04-20 Thread GitBox


lamber-ken commented on a change in pull request #10228:
URL: https://github.com/apache/flink/pull/10228#discussion_r411839773



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/util/JvmUtils.java
##
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.util;
+
+import java.io.ByteArrayInputStream;
+import java.io.InputStream;
+import java.io.SequenceInputStream;
+import java.lang.management.ManagementFactory;
+import java.lang.management.ThreadMXBean;
+import java.nio.charset.StandardCharsets;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Utilities for {@link java.lang.management.ManagementFactory}.
+ */
+public final class JvmUtils {
+
+   /**
+* Returns the thread info for all live threads with stack trace and 
synchronization information.
+*
+* @return the thread dump stream of current JVM
+*/
+   public static InputStream threadDumpStream() {
+   ThreadMXBean threadMxBean = ManagementFactory.getThreadMXBean();
+
+   List streams = Arrays
+   .stream(threadMxBean.dumpAllThreads(true, true))
+   .map((v) -> 
v.toString().getBytes(StandardCharsets.UTF_8))
+   .map(ByteArrayInputStream::new)
+   .collect(Collectors.toList());

Review comment:
   Hi, this returns an `InputStream`; the following three file types are 
processed as streams.
   
   ```
   public enum FileType {
 LOG,
 STDOUT,
 THREAD_DUMP
   }
   ```
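
   For completeness, a hedged sketch of how the per-thread streams could be stitched 
into a single `InputStream` (this mirrors the intent of the quoted `JvmUtils` code; 
the PR's exact implementation may differ):

   ```java
   import java.io.ByteArrayInputStream;
   import java.io.InputStream;
   import java.io.SequenceInputStream;
   import java.lang.management.ManagementFactory;
   import java.nio.charset.StandardCharsets;
   import java.util.Arrays;
   import java.util.Collections;
   import java.util.List;
   import java.util.stream.Collectors;

   public final class ThreadDumpStreams {

       /** Returns one InputStream that concatenates the dump of every live thread. */
       public static InputStream threadDumpStream() {
           List<InputStream> streams = Arrays
               .stream(ManagementFactory.getThreadMXBean().dumpAllThreads(true, true))
               .map(info -> info.toString().getBytes(StandardCharsets.UTF_8))
               .map(ByteArrayInputStream::new)
               .collect(Collectors.toList());
           return new SequenceInputStream(Collections.enumeration(streams));
       }
   }
   ```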





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11785: [FLINK-17206][table] refactor function catalog to support delayed UDF initialization.

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11785:
URL: https://github.com/apache/flink/pull/11785#issuecomment-615087743


   
   ## CI report:
   
   * bda5c47abf20ea3682c2b1d21188d1e08edd1d87 UNKNOWN
   * e55bf1e1dc93609a9303b36e2a53a9941c6a2515 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161005972) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7766)
 
   * 7fee2a180eea4d0e37ff86f04d728b5b501a3679 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11823: [FLINK-16412][hive] Disallow embedded metastore in HiveCatalog produc…

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11823:
URL: https://github.com/apache/flink/pull/11823#issuecomment-616533916


   
   ## CI report:
   
   * b07a02f49b062c7aa965c8b084585679bfd2716e Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/161038172) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7783)
 
   * 9c2f9e65d3046ee911f0494ccf20dbcdc92f9ddf UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11836: [FLINK-17188][python] Use pip instead of conda to install flake8 and sphinx

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11836:
URL: https://github.com/apache/flink/pull/11836#issuecomment-616922166


   
   ## CI report:
   
   * 6df508f9c18e15b1a4c15a691c5e71500d32262d Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161162947) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7815)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11687: [FLINK-16536][network][checkpointing] Implement InputChannel state recovery for unaligned checkpoint

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11687:
URL: https://github.com/apache/flink/pull/11687#issuecomment-611445542


   
   ## CI report:
   
   * 5bad018f00f85a7359345187a12d7938aa510d25 UNKNOWN
   * d51ce7f47381d99b843278cd701dcff223761a0b UNKNOWN
   * cba096ae3d8eba4a0d39c64659f76ad10a62be27 UNKNOWN
   * 9c84a49b5dfcafda82951da969e169cbe7b15645 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/161084792) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7804)
 
   * f20368de0bdc14ae26ea92256137a9551bdb0879 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161162880) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7814)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] lamber-ken commented on a change in pull request #10228: [FLINK-14816] Add thread dump feature for taskmanager

2020-04-20 Thread GitBox


lamber-ken commented on a change in pull request #10228:
URL: https://github.com/apache/flink/pull/10228#discussion_r411837822



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/util/JvmUtils.java
##
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.util;
+
+import java.io.ByteArrayInputStream;
+import java.io.InputStream;
+import java.io.SequenceInputStream;
+import java.lang.management.ManagementFactory;
+import java.lang.management.ThreadMXBean;
+import java.nio.charset.StandardCharsets;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Utilities for {@link java.lang.management.ManagementFactory}.
+ */
+public final class JvmUtils {
+
+   /**
+* Returns the thread info for all live threads with stack trace and 
synchronization information.
+*
+* @return the thread dump stream of current JVM
+*/
+   public static InputStream threadDumpStream() {
+   ThreadMXBean threadMxBean = ManagementFactory.getThreadMXBean();
+
+   List streams = Arrays
+   .stream(threadMxBean.dumpAllThreads(true, true))
+   .map((v) -> 
v.toString().getBytes(StandardCharsets.UTF_8))
+   .map(ByteArrayInputStream::new)
+   .collect(Collectors.toList());

Review comment:
   Hi, as follows:
   
   1. Definition `ThreadMXBean#dumpAllThreads`
   ```
   public ThreadInfo[] dumpAllThreads(boolean lockedMonitors, boolean 
lockedSynchronizers);
   ```
   
   2. Each ThreadInfo output
   ```
   "Monitor Ctrl-Break" Id=5 RUNNABLE
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:375)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at java.net.Socket.<init>(Socket.java:434)
at java.net.Socket.<init>(Socket.java:211)
at com.intellij.rt.execution.application.AppMainV2$1.run(AppMainV2.java:59)
   ```
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #10228: [FLINK-14816] Add thread dump feature for taskmanager

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #10228:
URL: https://github.com/apache/flink/pull/10228#issuecomment-554599522


   
   ## CI report:
   
   * 45a8d72869eee0e3eafdc007280725e8043f2521 UNKNOWN
   * c0a3c6e519d1793383794c797064039fa66b90d2 UNKNOWN
   * 69b2f84d8f611aaa55a6d366be0c0abd11ef8d73 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/161097781) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7806)
 
   * 737ce43f7dea5d3e6429a4d75c597630506deb1f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dianfu commented on a change in pull request #11768: [FLINK-16943][python] Support set the configuration option "pipeline.jars" in PyFlink.

2020-04-20 Thread GitBox


dianfu commented on a change in pull request #11768:
URL: https://github.com/apache/flink/pull/11768#discussion_r411828146



##
File path: docs/dev/table/python/dependency_management.md
##
@@ -22,7 +22,24 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-If third-party dependencies are used, you can specify the dependencies with 
the following Python Table APIs or through command line arguments directly when submitting the 
job.
+# Java Dependency Management

Review comment:
   What about `Java Dependency`?

##
File path: docs/dev/table/python/dependency_management.md
##
@@ -22,7 +22,24 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-If third-party dependencies are used, you can specify the dependencies with 
the following Python Table APIs or through command line arguments directly when submitting the 
job.
+# Java Dependency Management
+
+If third-party Java dependencies are used, you can using following code to add 
jars for your Python job.
+
+{% highlight python %}
+# Set jar urls in "pipeline.jars". The jars will be uploaded to the cluster.
+# NOTE: Only local file urls (start with "file://") are supported.
+table_env.get_config.set_configuration("pipeline.jars", 
"file:///my/jar/path/connector.jar;file:///my/jar/path/udf.jar")
+
+# Set jar urls in "pipeline.classpaths". The jars will be added to the 
classpath of the cluster.
+# Users should ensure the urls are accessible on both the local client and the 
cluster.
+# NOTE: The supported schemes includes: file,ftp,http,https,jar. "hdfs" is not 
supported by default.
+table_env.get_config.set_configuration("pipeline.classpaths", 
"file:///my/jar/path/connector.jar;file:///my/jar/path/udf.jar")
+{% endhighlight %}
+
+# Python Dependency Management

Review comment:
   What about `Python Dependency`?

##
File path: docs/dev/table/python/dependency_management.md
##
@@ -22,7 +22,24 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-If third-party dependencies are used, you can specify the dependencies with 
the following Python Table APIs or through command line arguments directly when submitting the 
job.
+# Java Dependency Management
+
+If third-party Java dependencies are used, you can using following code to add 
jars for your Python job.

Review comment:
   Users could also specify the Java dependencies via command line 
arguments, could we add a link for that?

##
File path: docs/dev/table/python/dependency_management.md
##
@@ -22,7 +22,24 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-If third-party dependencies are used, you can specify the dependencies with 
the following Python Table APIs or through command line arguments directly when submitting the 
job.
+# Java Dependency Management
+
+If third-party Java dependencies are used, you can using following code to add 
jars for your Python job.
+
+{% highlight python %}
+# Set jar urls in "pipeline.jars". The jars will be uploaded to the cluster.
+# NOTE: Only local file urls (start with "file://") are supported.
+table_env.get_config.set_configuration("pipeline.jars", 
"file:///my/jar/path/connector.jar;file:///my/jar/path/udf.jar")
+
+# Set jar urls in "pipeline.classpaths". The jars will be added to the 
classpath of the cluster.

Review comment:
   `Set jar urls in "pipeline.classpaths"` ->  `Specify a list of jar URLs 
via "pipeline.classpaths"`

##
File path: docs/dev/table/python/dependency_management.md
##
@@ -22,7 +22,24 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-If third-party dependencies are used, you can specify the dependencies with 
the following Python Table APIs or through command line arguments directly when submitting the 
job.
+# Java Dependency Management
+
+If third-party Java dependencies are used, you can using following code to add 
jars for your Python job.
+
+{% highlight python %}
+# Set jar urls in "pipeline.jars". The jars will be uploaded to the cluster.
+# NOTE: Only local file urls (start with "file://") are supported.

Review comment:
   `Set jar urls in "pipeline.jars".` -> `Specify a list of jar URLs via 
"pipeline.jars"`

##
File path: docs/dev/table/python/dependency_management.md
##
@@ -22,7 +22,24 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-If third-party dependencies are used, you can specify the dependencies with 
the following Python Table APIs or through command line arguments directly when submitting the 
job.
+# Java Dependency Management
+
+If third-party Java dependencies are used, you can using following code to add 
jars for your Python job.
+
+{% highlight python %}
+# Set jar urls in "pipeline.jars". The jars will be uploaded to the cluster.
+# NOTE: Only local file urls (start with "file://") 

[GitHub] [flink] lamber-ken commented on a change in pull request #10228: [FLINK-14816] Add thread dump feature for taskmanager

2020-04-20 Thread GitBox


lamber-ken commented on a change in pull request #10228:
URL: https://github.com/apache/flink/pull/10228#discussion_r411835757



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/taskmanager/TaskManagerThreadDumpFileHeaders.java
##
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.messages.taskmanager;
+
+import org.apache.flink.runtime.rest.HttpMethodWrapper;
+import 
org.apache.flink.runtime.rest.handler.taskmanager.TaskManagerThreadDumpFileHandler;
+import org.apache.flink.runtime.rest.messages.EmptyRequestBody;
+import org.apache.flink.runtime.rest.messages.UntypedResponseMessageHeaders;
+
+/**
+ * Headers for the {@link TaskManagerThreadDumpFileHandler}.
+ */
+public class TaskManagerThreadDumpFileHeaders implements 
UntypedResponseMessageHeaders {
+
+   private static final TaskManagerThreadDumpFileHeaders INSTANCE = new 
TaskManagerThreadDumpFileHeaders();
+
+   private static final String URL = 
String.format("/taskmanagers/:%s/dump", TaskManagerIdPathParameter.KEY);

Review comment:
   > 
   
   Done.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] lamber-ken commented on a change in pull request #10228: [FLINK-14816] Add thread dump feature for taskmanager

2020-04-20 Thread GitBox


lamber-ken commented on a change in pull request #10228:
URL: https://github.com/apache/flink/pull/10228#discussion_r411835682



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/FileType.java
##
@@ -22,6 +22,18 @@
  * Different file types to request from the {@link TaskExecutor}.
  */
 public enum FileType {
+   /**
+* the log file type for taskmanager
+*/
LOG,
-   STDOUT
+
+   /**
+* the stdout file type for taskmanager
+*/
+   STDOUT,
+
+   /**
+* the thread dump type for taskmanager
+*/
+   THREAD_DUMP

Review comment:
   IMO, I think it's best to keep it as it is.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] lamber-ken commented on issue #10228: [FLINK-14816] Add thread dump feature for taskmanager

2020-04-20 Thread GitBox


lamber-ken commented on issue #10228:
URL: https://github.com/apache/flink/pull/10228#issuecomment-616924812


   Hi @tillrohrmann @vthinkxie, please review again, thanks
   
   
![image](https://user-images.githubusercontent.com/20113411/79821055-53bbd680-83c0-11ea-9c4d-467f5451ef6a.png)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on issue #11836: [FLINK-17188][python] Use pip instead of conda to install flake8 and sphinx

2020-04-20 Thread GitBox


flinkbot commented on issue #11836:
URL: https://github.com/apache/flink/pull/11836#issuecomment-616922166


   
   ## CI report:
   
   * 6df508f9c18e15b1a4c15a691c5e71500d32262d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11835: [hotfix][table sql planner/table sql legacy planner]fix icu license in NOTICE file.

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11835:
URL: https://github.com/apache/flink/pull/11835#issuecomment-616914851


   
   ## CI report:
   
   * 3f4e611749de42b1c7b18a7a421879e2a55379d1 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161161009) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7812)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on issue #11797: [FLINK-17169][table-blink] Refactor BaseRow to use RowKind instead of byte header

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11797:
URL: https://github.com/apache/flink/pull/11797#issuecomment-615294694


   
   ## CI report:
   
   * 85f40e3041783b1dbda1eb3b812f23e77936f7b3 UNKNOWN
   * c0080483a48619667cbc5c64edd232ae88db0046 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161081219) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7803)
 
   * 5b8d39bef382e260dbec301105c32dde88153245 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161160982) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7811)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11687: [FLINK-16536][network][checkpointing] Implement InputChannel state recovery for unaligned checkpoint

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11687:
URL: https://github.com/apache/flink/pull/11687#issuecomment-611445542


   
   ## CI report:
   
   * 5bad018f00f85a7359345187a12d7938aa510d25 UNKNOWN
   * d51ce7f47381d99b843278cd701dcff223761a0b UNKNOWN
   * cba096ae3d8eba4a0d39c64659f76ad10a62be27 UNKNOWN
   * 9c84a49b5dfcafda82951da969e169cbe7b15645 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/161084792) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7804)
 
   * f20368de0bdc14ae26ea92256137a9551bdb0879 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] WeiZhong94 commented on issue #11785: [FLINK-17206][table] refactor function catalog to support delayed UDF initialization.

2020-04-20 Thread GitBox


WeiZhong94 commented on issue #11785:
URL: https://github.com/apache/flink/pull/11785#issuecomment-616920901


   @dawidwys Thanks again for your review! I have updated this PR according to 
your comment.







[GitHub] [flink] lirui-apache commented on issue #11823: [FLINK-16412][hive] Disallow embedded metastore in HiveCatalog produc…

2020-04-20 Thread GitBox


lirui-apache commented on issue #11823:
URL: https://github.com/apache/flink/pull/11823#issuecomment-616918761


   I updated `DependencyTest` so that it doesn't call 
`HiveCatalogFactory::createCatalog`, which is already covered by 
`HiveCatalogFactoryTest`.







[GitHub] [flink] danny0405 commented on a change in pull request #11568: [FLINK-16779][table] Add RAW type support in DDL and functions

2020-04-20 Thread GitBox


danny0405 commented on a change in pull request #11568:
URL: https://github.com/apache/flink/pull/11568#discussion_r411824095



##
File path: flink-table/flink-sql-parser/src/main/codegen/data/Parser.tdd
##
@@ -445,9 +445,10 @@
   # Return type of method implementation should be "SqlTypeNameSpec".
   # Example: SqlParseTimeStampZ().
   dataTypeParserMethods: [
-"ExtendedSqlBasicTypeName()"
-"CustomizedCollectionsTypeName()"
-"SqlMapTypeName()"
+"ExtendedSqlBasicTypeName()",
+"CustomizedCollectionsTypeName()",
+"SqlMapTypeName()",
+"SqlRawTypeName()",
 "ExtendedSqlRowTypeName()"

Review comment:
   Remove the trailing comma ",".

##
File path: 
flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/TestRelDataTypeFactory.java
##
@@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.sql.parser;
+
+import org.apache.flink.table.calcite.ExtendedRelTypeFactory;
+
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.rel.type.RelDataTypeImpl;
+import org.apache.calcite.rel.type.RelDataTypeSystem;
+import org.apache.calcite.sql.type.SqlTypeFactoryImpl;
+
+/**
+ * {@link RelDataTypeFactory} for testing purposes.
+ */
+public final class TestRelDataTypeFactory extends SqlTypeFactoryImpl 
implements ExtendedRelTypeFactory {
+
+   TestRelDataTypeFactory(RelDataTypeSystem typeSystem) {
+   super(typeSystem);
+   }
+
+   @Override
+   public RelDataType createRawType(String className, String 
serializerString) {
+   return new DummyRawType(className, serializerString);
+   }

Review comment:
   The type should be canonized, so that it can be compared by object reference.
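
   A minimal sketch of the canonization being suggested, assuming Calcite's protected 
`RelDataTypeFactoryImpl#canonize` is reachable from this `SqlTypeFactoryImpl` subclass; 
`DummyRawType` is the test type from this diff, and the exact call site is illustrative 
rather than the final implementation:

```java
// Sketch only, inside TestRelDataTypeFactory (which extends SqlTypeFactoryImpl):
@Override
public RelDataType createRawType(String className, String serializerString) {
    // Intern the freshly created type through the factory's canonize(), so two raw
    // types with the same className/serializerString resolve to the same instance
    // and can be compared by object reference.
    return canonize(new DummyRawType(className, serializerString));
}
```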

##
File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/calcite/FlinkTypeFactory.scala
##
@@ -487,6 +496,9 @@ object FlinkTypeFactory {
   // CURSOR for UDTF case, whose type info will never be used, just a 
placeholder
   case CURSOR => new TypeInformationRawType[Nothing](new NothingTypeInfo)
 
+  case OTHER =>
+relDataType.asInstanceOf[RawRelDataType].getRawType
+

Review comment:
   It would be better to add an `instanceOf` check here as protection.
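
   A minimal sketch of the guard being suggested, written in the Scala of 
`FlinkTypeFactory`; the exception type and message are assumptions for illustration only:

```scala
// Sketch only: unwrap the raw type solely when the RelDataType really is a
// RawRelDataType, and fail fast with a descriptive error instead of letting an
// unchecked asInstanceOf throw a ClassCastException.
case OTHER =>
  relDataType match {
    case rawType: RawRelDataType => rawType.getRawType
    case other =>
      throw new TableException(s"OTHER is expected to be a RawRelDataType, but was: $other")
  }
```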

##
File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/stream/table/validation/TableSinkValidationTest.scala
##
@@ -91,8 +91,8 @@ class TableSinkValidationTest extends TableTestBase {
 expectedException.expectMessage(
   "Field types of query result and registered TableSink default_catalog." +
   "default_database.testSink do not match.\n" +
-  "Query schema: [a: INT, b: BIGINT, c: STRING, d: BIGINT]\n" +
-  "Sink schema: [a: INT, b: BIGINT, c: STRING, d: INT]")
+  "Query schema: [a: INT, b: BIGINT, c: VARCHAR(2147483647), d: BIGINT]\n" 
+
+  "Sink schema: [a: INT, b: BIGINT, c: VARCHAR(2147483647), d: INT]")

Review comment:
   +1, why this change?









[jira] [Closed] (FLINK-17135) PythonCalcSplitRuleTest.testPandasFunctionMixedWithGeneralPythonFunction failed

2020-04-20 Thread Hequn Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng closed FLINK-17135.
---
Resolution: Fixed

> PythonCalcSplitRuleTest.testPandasFunctionMixedWithGeneralPythonFunction 
> failed
> ---
>
> Key: FLINK-17135
> URL: https://issues.apache.org/jira/browse/FLINK-17135
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Tests
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Shared by Chesnay on 
> https://issues.apache.org/jira/browse/FLINK-17093?focusedCommentId=17083055=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17083055:
> PythonCalcSplitRuleTest.testPandasFunctionMixedWithGeneralPythonFunction 
> failed on master:
> {code:java}
>  [INFO] Running 
> org.apache.flink.table.planner.plan.rules.logical.PythonCalcSplitRuleTest
>  [ERROR] Tests run: 19, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 0.441 s <<< FAILURE! - in 
> org.apache.flink.table.planner.plan.rules.logical.PythonCalcSplitRuleTest
>  [ERROR] 
> testPandasFunctionMixedWithGeneralPythonFunction(org.apache.flink.table.planner.plan.rules.logical.PythonCalcSplitRuleTest)
>  Time elapsed: 0.032 s <<< FAILURE!
>  java.lang.AssertionError: 
>  type mismatch:
>  type1:
>  INTEGER NOT NULL
>  type2:
>  INTEGER NOT NULL
>  at org.apache.calcite.util.Litmus$1.fail(Litmus.java:31)
>  at org.apache.calcite.plan.RelOptUtil.eq(RelOptUtil.java:2188)
>  at 
> org.apache.calcite.rex.RexProgramBuilder$RegisterInputShuttle.visitInputRef(RexProgramBuilder.java:948)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17135) PythonCalcSplitRuleTest.testPandasFunctionMixedWithGeneralPythonFunction failed

2020-04-20 Thread Hequn Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088230#comment-17088230
 ] 

Hequn Cheng commented on FLINK-17135:
-

Hi, the fix has been merged into master via 
34105c708b518f1fc5cc83f62bf10143ff662d13. Sorry for the trouble.

> PythonCalcSplitRuleTest.testPandasFunctionMixedWithGeneralPythonFunction 
> failed
> ---
>
> Key: FLINK-17135
> URL: https://issues.apache.org/jira/browse/FLINK-17135
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Tests
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Shared by Chesnay on 
> https://issues.apache.org/jira/browse/FLINK-17093?focusedCommentId=17083055=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17083055:
> PythonCalcSplitRuleTest.testPandasFunctionMixedWithGeneralPythonFunction 
> failed on master:
> {code:java}
>  [INFO] Running 
> org.apache.flink.table.planner.plan.rules.logical.PythonCalcSplitRuleTest
>  [ERROR] Tests run: 19, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 0.441 s <<< FAILURE! - in 
> org.apache.flink.table.planner.plan.rules.logical.PythonCalcSplitRuleTest
>  [ERROR] 
> testPandasFunctionMixedWithGeneralPythonFunction(org.apache.flink.table.planner.plan.rules.logical.PythonCalcSplitRuleTest)
>  Time elapsed: 0.032 s <<< FAILURE!
>  java.lang.AssertionError: 
>  type mismatch:
>  type1:
>  INTEGER NOT NULL
>  type2:
>  INTEGER NOT NULL
>  at org.apache.calcite.util.Litmus$1.fail(Litmus.java:31)
>  at org.apache.calcite.plan.RelOptUtil.eq(RelOptUtil.java:2188)
>  at 
> org.apache.calcite.rex.RexProgramBuilder$RegisterInputShuttle.visitInputRef(RexProgramBuilder.java:948)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #11835: [hotfix][table sql planner/table sql legacy planner]fix icu license in NOTICE file.

2020-04-20 Thread GitBox


flinkbot commented on issue #11835:
URL: https://github.com/apache/flink/pull/11835#issuecomment-616914851


   
   ## CI report:
   
   * 3f4e611749de42b1c7b18a7a421879e2a55379d1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] hequn8128 commented on issue #11771: [FLINK-17135][python][tests] Fix the test testPandasFunctionMixedWithGeneralPythonFunction to make it more stable

2020-04-20 Thread GitBox


hequn8128 commented on issue #11771:
URL: https://github.com/apache/flink/pull/11771#issuecomment-616914863


   @dawidwys @danny0405 Thanks a lot for double-checking this. 
   
   Merging...







[GitHub] [flink] flinkbot edited a comment on issue #11797: [FLINK-17169][table-blink] Refactor BaseRow to use RowKind instead of byte header

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11797:
URL: https://github.com/apache/flink/pull/11797#issuecomment-615294694


   
   ## CI report:
   
   * 85f40e3041783b1dbda1eb3b812f23e77936f7b3 UNKNOWN
   * c0080483a48619667cbc5c64edd232ae88db0046 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161081219) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7803)
 
   * 5b8d39bef382e260dbec301105c32dde88153245 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] danny0405 commented on issue #11771: [FLINK-17135][python][tests] Fix the test testPandasFunctionMixedWithGeneralPythonFunction to make it more stable

2020-04-20 Thread GitBox


danny0405 commented on issue #11771:
URL: https://github.com/apache/flink/pull/11771#issuecomment-616914403


   Thanks, I think it is ready to merge.







[GitHub] [flink] flinkbot commented on issue #11836: [FLINK-17188][python] Use pip instead of conda to install flake8 and sphinx

2020-04-20 Thread GitBox


flinkbot commented on issue #11836:
URL: https://github.com/apache/flink/pull/11836#issuecomment-616913599


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 6df508f9c18e15b1a4c15a691c5e71500d32262d (Tue Apr 21 
02:30:10 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Updated] (FLINK-17188) Failed to download conda when running python tests

2020-04-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17188:
---
Labels: pull-request-available test-stability  (was: test-stability)

> Failed to download conda when running python tests
> --
>
> Key: FLINK-17188
> URL: https://issues.apache.org/jira/browse/FLINK-17188
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Build System / Azure Pipelines
>Affects Versions: 1.11.0
>Reporter: Dawid Wysakowicz
>Assignee: Huang Xingbo
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7549=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=14487301-07d2-5d56-5690-6dfab9ffd4d9
> This pipeline failed to download conda
> If this issue starts appearing more often we should come up with some 
> solution for those kinds of problems.
> {code}
> CondaHTTPError: HTTP 000 CONNECTION FAILED for url 
> 
> Elapsed: -
> An HTTP error occurred when trying to retrieve this URL.
> HTTP errors are often intermittent, and a simple retry will get you on your 
> way.
> conda install sphinx failed please try to exec the script again.  
>   if failed many times, you can try to exec in the form of sudo 
> ./lint-python.sh -f
> PYTHON exited with EXIT CODE: 1.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] HuangXingBo opened a new pull request #11836: [FLINK-17188][python] Use pip instead of conda to install flake8 and sphinx

2020-04-20 Thread GitBox


HuangXingBo opened a new pull request #11836:
URL: https://github.com/apache/flink/pull/11836


   ## What is the purpose of the change
   
   *This pull request uses pip instead of conda to install flake8 and sphinx. We 
found that our Azure machines often cannot connect to the anaconda channel, yet 
pip can install even large packages such as apache-beam, pyarrow and numpy, so I 
chose pip for installing flake8 and sphinx.*
   
   
   ## Brief change log
   
 - *Use pip to install flake8 and sphinx in lint-python.sh*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   







[GitHub] [flink] dianfu commented on issue #11771: [FLINK-17135][python][tests] Fix the test testPandasFunctionMixedWithGeneralPythonFunction to make it more stable

2020-04-20 Thread GitBox


dianfu commented on issue #11771:
URL: https://github.com/apache/flink/pull/11771#issuecomment-616912878


   @danny0405 Thanks a lot for your comments. Personally I think there is no 
need to revert this change even after the issue is fixed in Calcite. The 
changed test case exercises the functionality of PythonCalcSplitRule and was 
only updated a bit to work around the Calcite bug; it is still a valid test 
case for its purpose.







[GitHub] [flink] leonardBang commented on a change in pull request #11334: [FLINK-16464][sql-client]result-mode tableau may shift when content contains Chinese String in SQL CLI

2020-04-20 Thread GitBox


leonardBang commented on a change in pull request #11334:
URL: https://github.com/apache/flink/pull/11334#discussion_r411265633



##
File path: flink-table/flink-sql-client/src/main/resources/META-INF/NOTICE
##
@@ -9,3 +9,4 @@ See bundled license files for details.
 
 - org.jline:jline-terminal:3.9.0
 - org.jline:jline-reader:3.9.0
+- com.ibm.icu:icu4j:65.1

Review comment:
   @aljoscha Thanks for your feedback, you're right that icu is not a pure 
BSD license. I hesitated to include it at first, but decided to add it 
after referencing `hive`: 
https://github.com/apache/hive/blob/master/binary-package-licenses/com.ibm.icu.icu4j-LICENSE

##
File path: flink-table/flink-sql-client/src/main/resources/META-INF/NOTICE
##
@@ -9,3 +9,4 @@ See bundled license files for details.
 
 - org.jline:jline-terminal:3.9.0
 - org.jline:jline-reader:3.9.0
+- com.ibm.icu:icu4j:65.1

Review comment:
   And `camel` uses it too: 
https://github.com/apache/camel/commit/83ae6d4e30387894b61c0799251e429a81b39435









[GitHub] [flink] flinkbot commented on issue #11835: [hotfix][table sql planner/table sql legacy planner]fix icu license in NOTICE file.

2020-04-20 Thread GitBox


flinkbot commented on issue #11835:
URL: https://github.com/apache/flink/pull/11835#issuecomment-616908047


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 3f4e611749de42b1c7b18a7a421879e2a55379d1 (Tue Apr 21 
02:11:35 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **Invalid pull request title: No valid Jira ID provided**
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Closed] (FLINK-17263) Remove RepeatFamilyOperandTypeChecker in blink planner and replace it with calcite's CompositeOperandTypeChecker

2020-04-20 Thread Kurt Young (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young closed FLINK-17263.
--
Fix Version/s: 1.11.0
 Assignee: Terry Wang
   Resolution: Fixed

master: 43876ff26aa7a29b5b1326e92cc755074ae3751e

> Remove RepeatFamilyOperandTypeChecker in blink planner and replace it  with 
> calcite's CompositeOperandTypeChecker
> -
>
> Key: FLINK-17263
> URL: https://issues.apache.org/jira/browse/FLINK-17263
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: Terry Wang
>Assignee: Terry Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Remove RepeatFamilyOperandTypeChecker in blink planner and replace it  with 
> calcite's CompositeOperandTypeChecker.
> It seems that what CompositeOperandTypeChecker can do is a super set of 
> RepeatFamilyOperandTypeChecker. To keep code easy to read, it's better to do 
> such refactor.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] leonardBang opened a new pull request #11835: [hotfix][table sql planner/table sql legacy planner]fix icu license in NOTICE file.

2020-04-20 Thread GitBox


leonardBang opened a new pull request #11835:
URL: https://github.com/apache/flink/pull/11835


   
   
   
   ## What is the purpose of the change
   
   *This pull request fixes the icu license entry, moving icu under the ICU 
license block in the NOTICE file.*
   
   
   ## Brief change log
   
 - *update file 
flink/flink-table/flink-table-planner-blink/src/main/resources/META-INF/NOTICE*
 - *update file 
flink/flink-table/flink-table-planner/src/main/resources/META-INF/NOTICE*
   
   
   ## Verifying this change
   
   This change is a hotfix for docs without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): ( no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   







[GitHub] [flink] KurtYoung commented on a change in pull request #11334: [FLINK-16464][sql-client]result-mode tableau may shift when content contains Chinese String in SQL CLI

2020-04-20 Thread GitBox


KurtYoung commented on a change in pull request #11334:
URL: https://github.com/apache/flink/pull/11334#discussion_r411811681



##
File path: flink-table/flink-sql-client/src/main/resources/META-INF/NOTICE
##
@@ -9,3 +9,4 @@ See bundled license files for details.
 
 - org.jline:jline-terminal:3.9.0
 - org.jline:jline-reader:3.9.0
+- com.ibm.icu:icu4j:65.1

Review comment:
   I see what you mean, thanks









[jira] [Assigned] (FLINK-16103) Translate "Configuration" page of "Table API & SQL" into Chinese

2020-04-20 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-16103:
---

Assignee: Delin Zhao

> Translate "Configuration" page of "Table API & SQL" into Chinese
> 
>
> Key: FLINK-16103
> URL: https://issues.apache.org/jira/browse/FLINK-16103
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Delin Zhao
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/config.html
> The markdown file is located in {{flink/docs/dev/table/config.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16103) Translate "Configuration" page of "Table API & SQL" into Chinese

2020-04-20 Thread Delin Zhao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088203#comment-17088203
 ] 

Delin Zhao commented on FLINK-16103:


You are right. I'd like to take it and complete it in my free time this week. 
Thanks.

> Translate "Configuration" page of "Table API & SQL" into Chinese
> 
>
> Key: FLINK-16103
> URL: https://issues.apache.org/jira/browse/FLINK-16103
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/config.html
> The markdown file is located in {{flink/docs/dev/table/config.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11834: [FLINK-17237][docs] Add Intro to DataStream API tutorial

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11834:
URL: https://github.com/apache/flink/pull/11834#issuecomment-616733646


   
   ## CI report:
   
   * 5f931d2730361d3d59801041010509e9945627c0 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161101219) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7807)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11826: [FLINK-17236][docs] Add Tutorials section overview

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11826:
URL: https://github.com/apache/flink/pull/11826#issuecomment-616548100


   
   ## CI report:
   
   * 99f48626c81334d4f942d19e5c9efd2098a8302e Travis: 
[CANCELED](https://travis-ci.com/github/flink-ci/flink/builds/161120771) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7808)
 
   * ce468f71367f3a09d6093afd65da4f1444517376 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #10228: [FLINK-14816] Add thread dump feature for taskmanager

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #10228:
URL: https://github.com/apache/flink/pull/10228#issuecomment-554599522


   
   ## CI report:
   
   * 45a8d72869eee0e3eafdc007280725e8043f2521 UNKNOWN
   * c0a3c6e519d1793383794c797064039fa66b90d2 UNKNOWN
   * 69b2f84d8f611aaa55a6d366be0c0abd11ef8d73 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/161097781) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7806)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Issue Comment Deleted] (FLINK-16099) Translate "HiveCatalog" page of "Hive Integration" into Chinese

2020-04-20 Thread zhangzhanhua (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangzhanhua updated FLINK-16099:
-
Comment: was deleted

(was: I can help with the translation)

> Translate "HiveCatalog" page of "Hive Integration" into Chinese 
> 
>
> Key: FLINK-16099
> URL: https://issues.apache.org/jira/browse/FLINK-16099
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/hive/hive_catalog.html
> The markdown file is located in 
> {{flink/docs/dev/table/hive/hive_catalog.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11834: [FLINK-17237][docs] Add Intro to DataStream API tutorial

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11834:
URL: https://github.com/apache/flink/pull/11834#issuecomment-616733646


   
   ## CI report:
   
   * 5f931d2730361d3d59801041010509e9945627c0 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161101219) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7807)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #10228: [FLINK-14816] Add thread dump feature for taskmanager

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #10228:
URL: https://github.com/apache/flink/pull/10228#issuecomment-554599522


   
   ## CI report:
   
   * 45a8d72869eee0e3eafdc007280725e8043f2521 UNKNOWN
   * c0a3c6e519d1793383794c797064039fa66b90d2 UNKNOWN
   * 69b2f84d8f611aaa55a6d366be0c0abd11ef8d73 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/161097781) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7806)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on issue #11797: [FLINK-17169][table-blink] Refactor BaseRow to use RowKind instead of byte header

2020-04-20 Thread GitBox


flinkbot edited a comment on issue #11797:
URL: https://github.com/apache/flink/pull/11797#issuecomment-615294694


   
   ## CI report:
   
   * 85f40e3041783b1dbda1eb3b812f23e77936f7b3 UNKNOWN
   * c0080483a48619667cbc5c64edd232ae88db0046 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161081219) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7803)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






