[GitHub] [flink] flinkbot edited a comment on pull request #11837: [FLINK-16160][table-planner-blink] Fix proctime()/rowtime() doesn't w…

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11837:
URL: https://github.com/apache/flink/pull/11837#issuecomment-616956896


   
   ## CI report:
   
   * 9a4391b7c7248cdca1d2536f402891a6b64335b7 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/161760513) Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=164)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11892: [FLINK-17112][table] Support DESCRIBE statement in Flink SQL

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11892:
URL: https://github.com/apache/flink/pull/11892#issuecomment-618766534


   
   ## CI report:
   
   * 84b468105ad7674897ed1df0f126d5f078944225 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161749793) Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=155)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11897: [FLINK-16104] Translate "Streaming Aggregation" page of "Table API & SQL" into Chinese

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11897:
URL: https://github.com/apache/flink/pull/11897#issuecomment-618812841


   
   ## CI report:
   
   * 7969f07cca4581f4ae7cbef7bd06787b3e90d248 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161766709) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=168)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11893: [FLINK-17344] Fix unstability of RecordWriterTest. testIdleTime and TaskMailboxProcessorTest.testIdleTime

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11893:
URL: https://github.com/apache/flink/pull/11893#issuecomment-618771008


   
   ## CI report:
   
   * b6a842dcaf10cca2b72a0155c1be580c5d9a3448 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161751204) Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=156)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] danny0405 commented on a change in pull request #11791: [FLINK-17210][sql-parser][hive] Implement database DDLs for Hive dialect

2020-04-23 Thread GitBox


danny0405 commented on a change in pull request #11791:
URL: https://github.com/apache/flink/pull/11791#discussion_r414314520



##
File path: flink-table/flink-sql-parser/pom.xml
##
@@ -298,6 +348,23 @@ under the License.

${project.build.directory}/generated-sources/


+   
+   generate-sources
+   javacc-hive
+   
+   javacc
+   
+   
+   
${project.build.directory}/generated-sources-hive/

Review comment:
   The plugin configuration is a mess. Can we just create another module instead of 
putting the code in one? And why do we copy Flink's parser file? It seems no 
parser code blocks are actually reused.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] danny0405 commented on a change in pull request #11791: [FLINK-17210][sql-parser][hive] Implement database DDLs for Hive dialect

2020-04-23 Thread GitBox


danny0405 commented on a change in pull request #11791:
URL: https://github.com/apache/flink/pull/11791#discussion_r414313942



##
File path: 
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/hive/HiveDDLUtils.java
##
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.sql.parser.ddl.hive;
+
+import org.apache.flink.sql.parser.ddl.SqlTableOption;
+import org.apache.flink.sql.parser.impl.ParseException;
+import org.apache.flink.table.catalog.config.CatalogConfig;
+
+import org.apache.calcite.sql.SqlLiteral;
+import org.apache.calcite.sql.SqlNode;
+import org.apache.calcite.sql.SqlNodeList;
+import org.apache.calcite.sql.parser.SqlParserPos;
+
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.Set;
+
+import static 
org.apache.flink.sql.parser.ddl.hive.SqlAlterHiveDatabase.ALTER_DATABASE_OP;
+import static 
org.apache.flink.sql.parser.ddl.hive.SqlCreateHiveDatabase.DATABASE_LOCATION_URI;
+
+/**
+ * Util methods for Hive DDL Sql nodes.
+ */
+public class HiveDDLUtils {
+
+   private static final Set<String> RESERVED_DB_PROPERTIES = new HashSet<>();
+

Review comment:
   It seems hacky that we pass these internal properties through the table 
options. I think these properties should be kept in each SqlNode rather than in 
the property list.
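
   A minimal sketch of the alternative, with hypothetical class and field names 
(only the Calcite types are real; this is not the PR's code): the LOCATION 
clause becomes a dedicated field on the node instead of an entry injected into 
the generic property list.

```java
import org.apache.calcite.sql.SqlCharStringLiteral;
import org.apache.calcite.sql.SqlIdentifier;
import org.apache.calcite.sql.SqlNodeList;
import org.apache.calcite.sql.parser.SqlParserPos;

/** Hypothetical node shape: Hive-specific attributes live on the SqlNode itself. */
public class SqlCreateHiveDatabaseSketch {

    private final SqlParserPos pos;
    private final SqlIdentifier databaseName;
    // Only user-supplied DBPROPERTIES go into the generic property list.
    private final SqlNodeList propertyList;
    // LOCATION is a dedicated, nullable field rather than a reserved property entry.
    private final SqlCharStringLiteral location;

    public SqlCreateHiveDatabaseSketch(
            SqlParserPos pos,
            SqlIdentifier databaseName,
            SqlNodeList propertyList,
            SqlCharStringLiteral location) {
        this.pos = pos;
        this.databaseName = databaseName;
        this.propertyList = propertyList;
        this.location = location;
    }

    public SqlCharStringLiteral getLocation() {
        return location;
    }
}
```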





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] GJL commented on a change in pull request #11857: [FLINK-17180][runtime] Implement SchedulingPipelinedRegion interface

2020-04-23 Thread GitBox


GJL commented on a change in pull request #11857:
URL: https://github.com/apache/flink/pull/11857#discussion_r414313732



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/flip1/RestartPipelinedRegionFailoverStrategy.java
##
@@ -84,34 +77,8 @@ public RestartPipelinedRegionFailoverStrategy(
ResultPartitionAvailabilityChecker 
resultPartitionAvailabilityChecker) {
 
this.topology = checkNotNull(topology);
-   this.regions = Collections.newSetFromMap(new 
IdentityHashMap<>());
-   this.vertexToRegionMap = new HashMap<>();
this.resultPartitionAvailabilityChecker = new 
RegionFailoverResultPartitionAvailabilityChecker(
resultPartitionAvailabilityChecker);
-
-   // build regions based on the given topology
-   LOG.info("Start building failover regions.");
-   buildFailoverRegions();
-   }
-   // 

-   //  region building
-   // 

-
-   private void buildFailoverRegions() {
-   final Set<Set<SchedulingExecutionVertex>> distinctRegions =
-   PipelinedRegionComputeUtil.computePipelinedRegions(topology);
-
-   // creating all the failover regions and register them
-   for (Set<SchedulingExecutionVertex> regionVertices : distinctRegions) {
-   LOG.debug("Creating a failover region with {} 
vertices.", regionVertices.size());
-   final FailoverRegion failoverRegion = new 
FailoverRegion(regionVertices);
-   regions.add(failoverRegion);
-   for (SchedulingExecutionVertex vertex : regionVertices) 
{
-   vertexToRegionMap.put(vertex.getId(), 
failoverRegion);
-   }
-   }
-
-   LOG.info("Created {} failover regions.", regions.size());

Review comment:
   OK, makes sense. Changed to INFO.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] KarmaGYZ commented on a change in pull request #11854: [WIP] Introduce external resource framework

2020-04-23 Thread GitBox


KarmaGYZ commented on a change in pull request #11854:
URL: https://github.com/apache/flink/pull/11854#discussion_r414313708



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/externalresource/ExternalResourceUtilsTest.java
##
@@ -85,4 +88,20 @@ public void testConstructExternalResourceDriversFromConfig() 
throws Exception {
assertThat(externalResourceDrivers.size(), is(1));
assertTrue(externalResourceDrivers.get(resourceName) instanceof 
TestingExternalResourceDriver);
}
+
+   @Test
+   public void testGetExternalResourceInfoFromEnvironment() {

Review comment:
   Same as above, I'd like to change it to `testGetExternalResourceInfo`.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] KarmaGYZ commented on a change in pull request #11854: [WIP] Introduce external resource framework

2020-04-23 Thread GitBox


KarmaGYZ commented on a change in pull request #11854:
URL: https://github.com/apache/flink/pull/11854#discussion_r414312301



##
File path: 
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/runtime/SavepointEnvironment.java
##
@@ -198,6 +199,11 @@ public GlobalAggregateManager getGlobalAggregateManager() {
throw new UnsupportedOperationException(ERROR_MSG);
}
 
+   @Override
+   public Map<String, ExternalResourceDriver> getExternalResourceDrivers() {
+   return Collections.emptyMap();
+   }

Review comment:
   I did throw an exception at first, but that caused failures in 
`SavepointWriterITCase`. It seems the `SavepointEnvironment` is currently still 
used to construct a `StreamingRuntimeContext`. I'm not familiar with the 
savepoint mechanism, but I suspect this is an issue, because the constructor of 
`SavepointEnvironment` already takes a `RuntimeContext` instance; it does not 
make sense to construct a `RuntimeContext` from it again. Thus, I currently 
choose to return `emptyMap` here.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #11897: [FLINK-16104] Translate "Streaming Aggregation" page of "Table API & SQL" into Chinese

2020-04-23 Thread GitBox


flinkbot commented on pull request #11897:
URL: https://github.com/apache/flink/pull/11897#issuecomment-618812841


   
   ## CI report:
   
   * 7969f07cca4581f4ae7cbef7bd06787b3e90d248 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] danny0405 commented on a change in pull request #11791: [FLINK-17210][sql-parser][hive] Implement database DDLs for Hive dialect

2020-04-23 Thread GitBox


danny0405 commented on a change in pull request #11791:
URL: https://github.com/apache/flink/pull/11791#discussion_r414308065



##
File path: flink-table/flink-sql-parser/src/main/codegen-hive/data/Parser.tdd
##
@@ -0,0 +1,547 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+{
+  # Generated parser implementation package and class name.
+  package: "org.apache.flink.sql.parser.impl",
+  class: "FlinkHiveSqlParserImpl",
+
+  # List of additional classes and packages to import.
+  # Example. "org.apache.calcite.sql.*", "java.util.List".
+  # Please keep the import classes in alphabetical order if new class is added.
+  imports: [
+"org.apache.flink.sql.parser.ddl.hive.HiveDDLUtils"
+"org.apache.flink.sql.parser.ddl.hive.SqlAlterHiveDatabaseLocation"
+"org.apache.flink.sql.parser.ddl.hive.SqlAlterHiveDatabaseOwner"
+"org.apache.flink.sql.parser.ddl.hive.SqlAlterHiveDatabaseProps"
+"org.apache.flink.sql.parser.ddl.hive.SqlCreateHiveDatabase"
+"org.apache.flink.sql.parser.ddl.SqlAlterDatabase"
+"org.apache.flink.sql.parser.ddl.SqlAlterTable"
+"org.apache.flink.sql.parser.ddl.SqlAlterTableProperties"
+"org.apache.flink.sql.parser.ddl.SqlAlterTableRename"
+"org.apache.flink.sql.parser.ddl.SqlCreateCatalog"
+"org.apache.flink.sql.parser.ddl.SqlCreateFunction"

Review comment:
   Remove the unused imports.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11893: [FLINK-17344] Fix unstability of RecordWriterTest. testIdleTime and TaskMailboxProcessorTest.testIdleTime

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11893:
URL: https://github.com/apache/flink/pull/11893#issuecomment-618771008


   
   ## CI report:
   
   * b6a842dcaf10cca2b72a0155c1be580c5d9a3448 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161751204) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=156)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11837: [FLINK-16160][table-planner-blink] Fix proctime()/rowtime() doesn't w…

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11837:
URL: https://github.com/apache/flink/pull/11837#issuecomment-616956896


   
   ## CI report:
   
   * 9a4391b7c7248cdca1d2536f402891a6b64335b7 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/161760513) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=164)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11687: [FLINK-16536][network][checkpointing] Implement InputChannel state recovery for unaligned checkpoint

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11687:
URL: https://github.com/apache/flink/pull/11687#issuecomment-611445542


   
   ## CI report:
   
   * 5bad018f00f85a7359345187a12d7938aa510d25 UNKNOWN
   * d51ce7f47381d99b843278cd701dcff223761a0b UNKNOWN
   * cba096ae3d8eba4a0d39c64659f76ad10a62be27 UNKNOWN
   * 23818c4b30eb9714154080253a090fc63d983925 UNKNOWN
   * 9b6278f15a4fc65d78671a145305df2c7310cc6a UNKNOWN
   * 4f7399680818f5d29e917e17720e00900822a43d UNKNOWN
   * 12c398dd2dc5b6716a590490575914698573ab76 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/161752886) Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=157)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11896: [FLINK-14356] [single-value] Introduce "single-value" format to (de)serialize message to a single field

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11896:
URL: https://github.com/apache/flink/pull/11896#issuecomment-618808279


   
   ## CI report:
   
   * 6c5110c57efcb28693fa1bb15f59dbfc9850e58f Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161765118) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=167)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11838: [FLINK-16965] Convert Graphite reporter to plugin

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11838:
URL: https://github.com/apache/flink/pull/11838#issuecomment-616983944


   
   ## CI report:
   
   * cd4f2390481d8edcea676f07a6db2eafe0a4025c UNKNOWN
   * b961c6a03d184e9a091b1513cb82f7aaa3956cf3 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161749746) Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=153)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] KarmaGYZ commented on a change in pull request #11854: [WIP] Introduce external resource framework

2020-04-23 Thread GitBox


KarmaGYZ commented on a change in pull request #11854:
URL: https://github.com/apache/flink/pull/11854#discussion_r414305550



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/operators/util/DistributedRuntimeUDFContext.java
##
@@ -41,10 +44,14 @@
 public class DistributedRuntimeUDFContext extends AbstractRuntimeUDFContext {
 
private final HashMap<String, BroadcastVariableMaterialization<?, ?>> broadcastVars = new HashMap<>();
-
+
+   private final Map<String, Set<ExternalResourceInfo>> externalResources;
+
public DistributedRuntimeUDFContext(TaskInfo taskInfo, ClassLoader userCodeClassLoader, ExecutionConfig executionConfig,
-   Map<String, Future<Path>> cpTasks, Map<String, Accumulator<?, ?>> accumulators, MetricGroup metrics) {
+   Map<String, Future<Path>> cpTasks, Map<String, Accumulator<?, ?>> accumulators,
+   MetricGroup metrics, Map<String, Set<ExternalResourceInfo>> externalResources) {
super(taskInfo, userCodeClassLoader, executionConfig, accumulators, cpTasks, metrics);
+   this.externalResources = externalResources;

Review comment:
   I like the idea of having a builder/factory that takes an `Environment`. 
However, it seems `RuntimeContext` should not be coupled with `Environment`: 
`RuntimeContext` lives at the Function level, while `Environment` lives at the 
TaskManager level, and Flink does not guarantee that a TM always exists when a 
Function is triggered. A decoupled wiring could look like the sketch below.
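
   To illustrate the decoupling being argued for, here is a minimal, 
self-contained sketch; every type and method name is a hypothetical stand-in, 
not Flink API. The TM-side caller resolves the external resources once and 
hands a plain, immutable map to the function-level context, so the context 
never holds an `Environment` reference.

```java
import java.util.Collections;
import java.util.Map;
import java.util.Set;

final class ExternalResourceWiring {

    /** Stand-in for the TM-level source of resource information. */
    interface ResourceSource {
        Map<String, Set<String>> resolveExternalResources();
    }

    /** Stand-in for the function-level context: plain data in, no Environment reference kept. */
    static final class UdfContext {
        private final Map<String, Set<String>> externalResources;

        UdfContext(Map<String, Set<String>> externalResources) {
            this.externalResources = Collections.unmodifiableMap(externalResources);
        }

        Set<String> getExternalResourceInfos(String resourceName) {
            return externalResources.getOrDefault(resourceName, Collections.emptySet());
        }
    }

    /** Extraction happens once, on the TM side, before the context reaches user functions. */
    static UdfContext createContext(ResourceSource tmSide) {
        return new UdfContext(tmSide.resolveExternalResources());
    }

    private ExternalResourceWiring() {}
}
```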





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] danny0405 commented on a change in pull request #11791: [FLINK-17210][sql-parser][hive] Implement database DDLs for Hive dialect

2020-04-23 Thread GitBox


danny0405 commented on a change in pull request #11791:
URL: https://github.com/apache/flink/pull/11791#discussion_r414305379



##
File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/HiveCatalog.java
##
@@ -1403,4 +1446,29 @@ public CatalogColumnStatistics 
getPartitionColumnStatistics(ObjectPath tablePath
}
}
 
+   private static boolean createObjectIsGeneric(Map<String, String> properties) {
+   // When creating an object, a Hive object needs to explicitly have the key is_generic = false;
+   // otherwise, this is a generic object if 1) the key is missing or 2) is_generic = true.
+   // This is the opposite of reading an object. See getObjectIsGeneric().
+   if (properties == null) {
+   return true;
+   }
+   boolean isGeneric;
+   if (!properties.containsKey(CatalogConfig.IS_GENERIC)) {
+   // must be a generic object
+   isGeneric = true;
+   properties.put(CatalogConfig.IS_GENERIC, 
String.valueOf(true));
+   } else {
+   isGeneric = 
Boolean.parseBoolean(properties.get(CatalogConfig.IS_GENERIC));
+   }
+   return isGeneric;
+   }
+
+   private static boolean getObjectIsGeneric(Map<String, String> properties) {
+   // When retrieving an object, a generic object needs to explicitly have the key is_generic = true

Review comment:
   How about naming it `getObjectIsGenericDefaultFalse`?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] danny0405 commented on a change in pull request #11791: [FLINK-17210][sql-parser][hive] Implement database DDLs for Hive dialect

2020-04-23 Thread GitBox


danny0405 commented on a change in pull request #11791:
URL: https://github.com/apache/flink/pull/11791#discussion_r414269563



##
File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/HiveCatalog.java
##
@@ -247,7 +253,10 @@ public CatalogDatabase getDatabase(String databaseName) 
throws DatabaseNotExistE
 
Map<String, String> properties = hiveDatabase.getParameters();
 
-   properties.put(HiveCatalogConfig.DATABASE_LOCATION_URI, 
hiveDatabase.getLocationUri());
+   boolean isGeneric = getObjectIsGeneric(properties);
+   if (!isGeneric) {
+   
properties.put(SqlCreateHiveDatabase.DATABASE_LOCATION_URI, 
hiveDatabase.getLocationUri());
+   }

Review comment:
   We usually put the property keys in the connector validator instead of 
the SqlNode, I think.
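
   A hedged sketch of that placement (class name and key string are 
hypothetical): the property key is declared next to the connector-side 
validation logic that consumes it, rather than on the parser's SqlNode.

```java
import java.util.Map;

/** Hypothetical connector-side validator that owns the property keys. */
public class HiveCatalogValidatorSketch {

    // The key lives with the component that reads it, not with the SqlNode.
    public static final String DATABASE_LOCATION_URI = "database.location-uri";

    public void validate(Map<String, String> properties) {
        String location = properties.get(DATABASE_LOCATION_URI);
        if (location != null && location.trim().isEmpty()) {
            throw new IllegalArgumentException(DATABASE_LOCATION_URI + " must not be empty when set");
        }
    }
}
```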

##
File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/HiveCatalog.java
##
@@ -1403,4 +1446,29 @@ public CatalogColumnStatistics 
getPartitionColumnStatistics(ObjectPath tablePath
}
}
 
+   private static boolean createObjectIsGeneric(Map<String, String> properties) {
+   // When creating an object, a Hive object needs to explicitly have the key is_generic = false

Review comment:
   How about naming it `getObjectIsGenericDefaultTrue`?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #11897: [FLINK-16104] Translate "Streaming Aggregation" page of "Table API & SQL" into Chinese

2020-04-23 Thread GitBox


flinkbot commented on pull request #11897:
URL: https://github.com/apache/flink/pull/11897#issuecomment-618810155


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 7969f07cca4581f4ae7cbef7bd06787b3e90d248 (Fri Apr 24 
05:31:01 UTC 2020)
   
✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-17344) RecordWriterTest.testIdleTime possibly deadlocks on Travis

2020-04-23 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang reassigned FLINK-17344:


Assignee: Wenlong Lyu  (was: Wenlong Li)

> RecordWriterTest.testIdleTime possibly deadlocks on Travis
> --
>
> Key: FLINK-17344
> URL: https://issues.apache.org/jira/browse/FLINK-17344
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Reporter: Andrey Zagrebin
>Assignee: Wenlong Lyu
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>
> [https://travis-ci.org/github/apache/flink/jobs/678193214]
>  The test was introduced in FLINK-16864.
>  It may be an instability as it passed 2 times (core and core-scala) and 
> failed in core-hadoop:
>  [https://travis-ci.org/github/apache/flink/builds/678193199]
> The PR, for which the test suite was running, is only about JVM memory args 
> for JM process.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16104) Translate "Streaming Aggregation" page of "Table API & SQL" into Chinese

2020-04-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16104:
---
Labels: pull-request-available  (was: )

> Translate "Streaming Aggregation" page of "Table API & SQL" into Chinese 
> -
>
> Key: FLINK-16104
> URL: https://issues.apache.org/jira/browse/FLINK-16104
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: ChaojianZhang
>Priority: Major
>  Labels: pull-request-available
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/tuning/streaming_aggregation_optimization.html
> The markdown file is located in 
> {{flink/docs/dev/table/tuning/streaming_aggregation_optimization.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-17344) RecordWriterTest.testIdleTime possibly deadlocks on Travis

2020-04-23 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang reassigned FLINK-17344:


Assignee: Wenlong Li

> RecordWriterTest.testIdleTime possibly deadlocks on Travis
> --
>
> Key: FLINK-17344
> URL: https://issues.apache.org/jira/browse/FLINK-17344
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Reporter: Andrey Zagrebin
>Assignee: Wenlong Li
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>
> [https://travis-ci.org/github/apache/flink/jobs/678193214]
>  The test was introduced in FLINK-16864.
>  It may be an instability as it passed 2 times (core and core-scala) and 
> failed in core-hadoop:
>  [https://travis-ci.org/github/apache/flink/builds/678193199]
> The PR, for which the test suite was running, is only about JVM memory args 
> for JM process.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] chaojianok opened a new pull request #11897: [FLINK-16104] Translate "Streaming Aggregation" page of "Table API & SQL" into Chinese

2020-04-23 Thread GitBox


chaojianok opened a new pull request #11897:
URL: https://github.com/apache/flink/pull/11897


   [FLINK-16104] Translate "Streaming Aggregation" page of "Table API & SQL" 
into Chinese.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] KurtYoung commented on a change in pull request #11874: [FLINK-16935][table-planner-blink] Enable or delete most of the ignored test cases in blink planner.

2020-04-23 Thread GitBox


KurtYoung commented on a change in pull request #11874:
URL: https://github.com/apache/flink/pull/11874#discussion_r414302741



##
File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/expressions/NonDeterministicTests.scala
##
@@ -23,71 +23,65 @@ import org.apache.flink.table.api.scala._
 import org.apache.flink.table.planner.expressions.utils.ExpressionTestBase
 import org.apache.flink.types.Row
 
-import org.junit.{Ignore, Test}
+import org.junit.Test
 
 /**
   * Tests that can only be checked manually as they are non-deterministic.
   */
 class NonDeterministicTests extends ExpressionTestBase {
 
-  @Ignore

Review comment:
   I don't think any developer will notice this file and actually do the 
manual check. I would either delete it or repurpose the test to verify that 
all non-deterministic functions run successfully.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11895: [FLINK-10911] Flink's flink-scala-shell is not working with Scala 2.12

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11895:
URL: https://github.com/apache/flink/pull/11895#issuecomment-618799215


   
   ## CI report:
   
   * 191da9e8a608bde3657193779bc9f0a48350f36d Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161760585) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=166)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=161)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11892: [FLINK-17112][table] Support DESCRIBE statement in Flink SQL

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11892:
URL: https://github.com/apache/flink/pull/11892#issuecomment-618766534


   
   ## CI report:
   
   * 84b468105ad7674897ed1df0f126d5f078944225 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161749793) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=155)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #11896: [FLINK-14356] [single-value] Introduce "single-value" format to (de)serialize message to a single field

2020-04-23 Thread GitBox


flinkbot commented on pull request #11896:
URL: https://github.com/apache/flink/pull/11896#issuecomment-618808279


   
   ## CI report:
   
   * 6c5110c57efcb28693fa1bb15f59dbfc9850e58f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11687: [FLINK-16536][network][checkpointing] Implement InputChannel state recovery for unaligned checkpoint

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11687:
URL: https://github.com/apache/flink/pull/11687#issuecomment-611445542


   
   ## CI report:
   
   * 5bad018f00f85a7359345187a12d7938aa510d25 UNKNOWN
   * d51ce7f47381d99b843278cd701dcff223761a0b UNKNOWN
   * cba096ae3d8eba4a0d39c64659f76ad10a62be27 UNKNOWN
   * 23818c4b30eb9714154080253a090fc63d983925 UNKNOWN
   * 9b6278f15a4fc65d78671a145305df2c7310cc6a UNKNOWN
   * 4f7399680818f5d29e917e17720e00900822a43d UNKNOWN
   * 12c398dd2dc5b6716a590490575914698573ab76 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161752886) Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=157)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11797: [FLINK-17169][table-blink] Refactor BaseRow to use RowKind instead of byte header

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11797:
URL: https://github.com/apache/flink/pull/11797#issuecomment-615294694


   
   ## CI report:
   
   * 85f40e3041783b1dbda1eb3b812f23e77936f7b3 UNKNOWN
   * b8122fd40266de44eea61cd65e0b61cecea8 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161749726) Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=152)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] KarmaGYZ commented on a change in pull request #11854: [WIP] Introduce external resource framework

2020-04-23 Thread GitBox


KarmaGYZ commented on a change in pull request #11854:
URL: https://github.com/apache/flink/pull/11854#discussion_r414300630



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/externalresource/ExternalResourceUtils.java
##
@@ -117,4 +138,17 @@
 
return externalResourceDrivers;
}
+
+   /**
+* Get the external resource information from environment. Index by the 
resourceName.
+*/
+   public static Map<String, Set<ExternalResourceInfo>> getExternalResourceInfo(
+   Map<String, ExternalResourceDriver> externalResourceDrivers, Configuration configuration) {

Review comment:
   I think it would be better not to expose all components to this function.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zjffdu commented on pull request #11895: [FLINK-10911] Flink's flink-scala-shell is not working with Scala 2.12

2020-04-23 Thread GitBox


zjffdu commented on pull request #11895:
URL: https://github.com/apache/flink/pull/11895#issuecomment-618806981


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #11896: [FLINK-14356] [single-value] Introduce "single-value" format to (de)serialize message to a single field

2020-04-23 Thread GitBox


flinkbot commented on pull request #11896:
URL: https://github.com/apache/flink/pull/11896#issuecomment-618801326


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 6c5110c57efcb28693fa1bb15f59dbfc9850e58f (Fri Apr 24 
04:55:54 UTC 2020)
   
   **Warnings:**
* **2 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-14356) Introduce "single-field" format to (de)serialize message to a single field

2020-04-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14356:
---
Labels: pull-request-available  (was: )

> Introduce "single-field" format to (de)serialize message to a single field
> --
>
> Key: FLINK-14356
> URL: https://issues.apache.org/jira/browse/FLINK-14356
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / API
>Reporter: jinfeng
>Assignee: jinfeng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> I want to use Flink SQL to write Kafka messages directly to HDFS, with no 
> serialization or deserialization of the messages in between: the bytes of the 
> message should directly become the first field of the Row. However, the 
> current RowSerializationSchema does not support converting bytes to VARBINARY. 
> Can we add special RowSerializationSchema and RowDeserializationSchema 
> implementations?
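
To make the idea concrete, here is an illustrative sketch of the 
deserialization side; the class name is invented and this is not the PR's 
actual code, but the interfaces and type-info constants are existing Flink API. 
The raw message bytes pass straight through as the single field of the produced 
Row.

```java
import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.typeinfo.PrimitiveArrayTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.RowTypeInfo;
import org.apache.flink.types.Row;

/** Illustrative single-value deserializer: message bytes become field 0 of the Row. */
public class SingleValueBytesDeserializer implements DeserializationSchema<Row> {

    @Override
    public Row deserialize(byte[] message) {
        // No intermediate decoding: the raw bytes are the row's only field.
        return Row.of(message);
    }

    @Override
    public boolean isEndOfStream(Row nextElement) {
        return false;
    }

    @Override
    public TypeInformation<Row> getProducedType() {
        return new RowTypeInfo(PrimitiveArrayTypeInfo.BYTE_PRIMITIVE_ARRAY_TYPE_INFO);
    }
}
```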



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] hackergin opened a new pull request #11896: [FLINK-14356] [single-value] Introduce "single-value" format to (de)serialize message to a single field

2020-04-23 Thread GitBox


hackergin opened a new pull request #11896:
URL: https://github.com/apache/flink/pull/11896


   
   
   ## What is the purpose of the change
   
   This pull request introduces a single-value format to (de)serialize a message 
to/from a single field.
   
   
   ## Brief change log
   
   - Add the single-value format. Currently supports byte[], string, int, float, 
double, long, and boolean.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (**yes** / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ **not documented**)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11837: [FLINK-16160][table-planner-blink] Fix proctime()/rowtime() doesn't w…

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11837:
URL: https://github.com/apache/flink/pull/11837#issuecomment-616956896


   
   ## CI report:
   
   * 38df357bc5c54d93159c2d460833cf80c42b5e0c Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161413766) Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=66)
 
   * 9a4391b7c7248cdca1d2536f402891a6b64335b7 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161760513) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=164)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #11895: [FLINK-10911] Flink's flink-scala-shell is not working with Scala 2.12 [WIP]

2020-04-23 Thread GitBox


flinkbot commented on pull request #11895:
URL: https://github.com/apache/flink/pull/11895#issuecomment-618799215


   
   ## CI report:
   
   * 191da9e8a608bde3657193779bc9f0a48350f36d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11838: [FLINK-16965] Convert Graphite reporter to plugin

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11838:
URL: https://github.com/apache/flink/pull/11838#issuecomment-616983944


   
   ## CI report:
   
   * cd4f2390481d8edcea676f07a6db2eafe0a4025c UNKNOWN
   * b961c6a03d184e9a091b1513cb82f7aaa3956cf3 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161749746) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=153)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11853: [FLINK-15006][table-planner] Add option to shuffle-by-partition when dynamic inserting

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11853:
URL: https://github.com/apache/flink/pull/11853#issuecomment-617586421


   
   ## CI report:
   
   * 99b58fba1e22bdbba0896f0aba181eafc6a28e7f Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161461206) Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=81)
 
   * 258a96c64c409102c679382b054d17602df65953 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161760537) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=165)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11796: [FLINK-14258][table][filesystem] Integrate file system connector to streaming sink

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11796:
URL: https://github.com/apache/flink/pull/11796#issuecomment-615254717


   
   ## CI report:
   
   * 90fa0d286fbdae57d834d527a8843a8a493d5a0d Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/160733394) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7669)
 
   * 4d98d46528fab8087fccf79b9184f27a6197c714 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161760487) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=163)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-17364) Support StreamingFileSink in PrestoS3FileSystem

2020-04-23 Thread Lu Niu (Jira)
Lu Niu created FLINK-17364:
--

 Summary: Support StreamingFileSink in PrestoS3FileSystem
 Key: FLINK-17364
 URL: https://issues.apache.org/jira/browse/FLINK-17364
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / FileSystem
Reporter: Lu Niu


For S3, the StreamingFileSink currently supports only the Hadoop-based 
FileSystem implementation, not the Presto-based one. At the same time, Presto 
is the recommended file system for checkpointing. Implementing StreamingFileSink 
support in PrestoS3FileSystem fills this gap and enables users to rely on 
PrestoS3FileSystem for all access to S3.
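
For context, a usage sketch of what the improvement would enable (the bucket 
path is made up; `StreamingFileSink.forRowFormat` and `SimpleStringEncoder` are 
existing Flink APIs, and the `s3p://` scheme is exactly what this ticket asks 
to support):

```java
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

public class PrestoS3SinkSketch {

    public static void attachSink(DataStream<String> stream) {
        // "s3p://" selects the Presto-based S3 file system; today the
        // StreamingFileSink works only with the Hadoop-based implementation.
        StreamingFileSink<String> sink = StreamingFileSink
            .forRowFormat(new Path("s3p://my-bucket/output"), new SimpleStringEncoder<String>("UTF-8"))
            .build();
        stream.addSink(sink);
    }
}
```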



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] Ruan-Xin commented on pull request #11876: [FLINK-17334] HIVE UDF BUGFIX

2020-04-23 Thread GitBox


Ruan-Xin commented on pull request #11876:
URL: https://github.com/apache/flink/pull/11876#issuecomment-618795529


   > ObjectInspectorFactory.ObjectInspectorOptions.JAVA)
   
   I got it. You're right, I can get the correct result with this method. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11853: [FLINK-15006][table-planner] Add option to shuffle-by-partition when dynamic inserting

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11853:
URL: https://github.com/apache/flink/pull/11853#issuecomment-617586421


   
   ## CI report:
   
   * 99b58fba1e22bdbba0896f0aba181eafc6a28e7f Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161461206) Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=81)
 
   * 258a96c64c409102c679382b054d17602df65953 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11837: [FLINK-16160][table-planner-blink] Fix proctime()/rowtime() doesn't w…

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11837:
URL: https://github.com/apache/flink/pull/11837#issuecomment-616956896


   
   ## CI report:
   
   * 38df357bc5c54d93159c2d460833cf80c42b5e0c Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161413766) Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=66)
 
   * 9a4391b7c7248cdca1d2536f402891a6b64335b7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11797: [FLINK-17169][table-blink] Refactor BaseRow to use RowKind instead of byte header

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11797:
URL: https://github.com/apache/flink/pull/11797#issuecomment-615294694


   
   ## CI report:
   
   * 85f40e3041783b1dbda1eb3b812f23e77936f7b3 UNKNOWN
   * b8122fd40266de44eea61cd65e0b61cecea8 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161749726) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=152)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11796: [FLINK-14258][table][filesystem] Integrate file system connector to streaming sink

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11796:
URL: https://github.com/apache/flink/pull/11796#issuecomment-615254717


   
   ## CI report:
   
   * 90fa0d286fbdae57d834d527a8843a8a493d5a0d Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/160733394) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7669)
 
   * 4d98d46528fab8087fccf79b9184f27a6197c714 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11774: [FLINK-17020][runtime] Introduce GlobalDataExchangeMode for JobGraph generation

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11774:
URL: https://github.com/apache/flink/pull/11774#issuecomment-614576955


   
   ## CI report:
   
   * 1f72fb850f69449f4ef886ec0cad8a0644bab93d Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161169787) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7820)
 
   * b1316cec224ccda73e1eed8226c0b5b61f2c6e21 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161758669) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=162)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17273) Fix not calling ResourceManager#closeTaskManagerConnection in KubernetesResourceManager in case of registered TaskExecutor failure

2020-04-23 Thread Canbin Zheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091156#comment-17091156
 ] 

Canbin Zheng commented on FLINK-17273:
--

Thanks a lot for the input [~trohrmann] [~xintongsong]. I agree that we need to 
revisit the boundary between {{ResourceManager}} and its deployment-specific 
implementations, especially for the worker lifecycle control flow; I will take 
a closer look at the overall architecture and get back to you to discuss it 
further.

> Fix not calling ResourceManager#closeTaskManagerConnection in 
> KubernetesResourceManager in case of registered TaskExecutor failure
> --
>
> Key: FLINK-17273
> URL: https://issues.apache.org/jira/browse/FLINK-17273
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Runtime / Coordination
>Affects Versions: 1.10.0, 1.10.1
>Reporter: Canbin Zheng
>Assignee: Canbin Zheng
>Priority: Major
> Fix For: 1.11.0
>
>
> At the moment, the {{KubernetesResourceManager}} does not call the method of 
> {{ResourceManager#closeTaskManagerConnection}} once it detects that a 
> currently registered task executor has failed. This ticket proposes to fix 
> this problem.
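
A hedged sketch of the missing call path (the `onPodTerminated` hook and its 
wiring are illustrative, not the actual patch; `closeTaskManagerConnection` is 
the existing ResourceManager method the ticket refers to):

```java
// Inside KubernetesResourceManager (sketch): when a registered TaskExecutor's
// pod terminates, also tear down the RM-side connection so its slots are
// released and the SlotManager is notified.
private void onPodTerminated(ResourceID resourceId, String podName) {
    internalStopPod(podName);
    // the call this ticket says is missing today:
    closeTaskManagerConnection(resourceId,
        new FlinkException("Pod " + podName + " terminated."));
}
```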



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16662) Blink Planner failed to generate JobGraph for POJO DataStream converting to Table (Cannot determine simple type name)

2020-04-23 Thread chenxyz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091155#comment-17091155
 ] 

chenxyz commented on FLINK-16662:
-

[~jark] thanks, you're right, an end-to-end test is a better way to run in a 
Flink cluster and test the whole lifecycle.

> Blink Planner failed to generate JobGraph for POJO DataStream converting to 
> Table (Cannot determine simple type name)
> -
>
> Key: FLINK-16662
> URL: https://issues.apache.org/jira/browse/FLINK-16662
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission, Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: chenxyz
>Assignee: LionelZ
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0
>
>
> When using the Blink Planner to convert a POJO DataStream to a Table, Blink 
> generates and compiles the SourceConversion$1 code. If the job is submitted as 
> a jar, the UserCodeClassLoader is not used when generating the JobGraph, so the 
> ClassLoader (AppClassLoader) of the compiled code cannot load the POJO class 
> from the jar, and the following error is reported:
>  
> {code:java}
> Caused by: org.codehaus.commons.compiler.CompileException: Line 27, Column 
> 174: Cannot determine simple type name "net"Caused by: 
> org.codehaus.commons.compiler.CompileException: Line 27, Column 174: Cannot 
> determine simple type name "net" at 
> org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:12124) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6746) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6507) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:6486) at 
> org.codehaus.janino.UnitCompiler.access$13800(UnitCompiler.java:215) at 
> org.codehaus.janino.UnitCompiler$21$1.visitReferenceType(UnitCompiler.java:6394)
>  at 
> org.codehaus.janino.UnitCompiler$21$1.visitReferenceType(UnitCompiler.java:6389)
>  at org.codehaus.janino.Java$ReferenceType.accept(Java.java:3917) at 
> org.codehaus.janino.UnitCompiler$21.visitType(UnitCompiler.java:6389) at 
> org.codehaus.janino.UnitCompiler$21.visitType(UnitCompiler.java:6382) at 
> org.codehaus.janino.Java$ReferenceType.accept(Java.java:3916) at 
> org.codehaus.janino.UnitCompiler.getType(UnitCompiler.java:6382) at 
> org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:7009) at 
> org.codehaus.janino.UnitCompiler.access$15200(UnitCompiler.java:215) at 
> org.codehaus.janino.UnitCompiler$21$2.visitCast(UnitCompiler.java:6425) at 
> org.codehaus.janino.UnitCompiler$21$2.visitCast(UnitCompiler.java:6403) at 
> org.codehaus.janino.Java$Cast.accept(Java.java:4887) at 
> org.codehaus.janino.UnitCompiler$21.visitRvalue(UnitCompiler.java:6403) at 
> org.codehaus.janino.UnitCompiler$21.visitRvalue(UnitCompiler.java:6382) at 
> org.codehaus.janino.Java$Rvalue.accept(Java.java:4105) at 
> org.codehaus.janino.UnitCompiler.getType(UnitCompiler.java:6382) at 
> org.codehaus.janino.UnitCompiler.findMostSpecificIInvocable(UnitCompiler.java:9150)
>  at org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:9036) at 
> org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:8938) at 
> org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:5060) at 
> org.codehaus.janino.UnitCompiler.access$9100(UnitCompiler.java:215) at 
> org.codehaus.janino.UnitCompiler$16.visitMethodInvocation(UnitCompiler.java:4421)
>  at 
> org.codehaus.janino.UnitCompiler$16.visitMethodInvocation(UnitCompiler.java:4394)
>  at org.codehaus.janino.Java$MethodInvocation.accept(Java.java:5062) at 
> org.codehaus.janino.UnitCompiler.compileGet(UnitCompiler.java:4394) at 
> org.codehaus.janino.UnitCompiler.compileGetValue(UnitCompiler.java:5575) at 
> org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:5019) at 
> org.codehaus.janino.UnitCompiler.access$8600(UnitCompiler.java:215) at 
> org.codehaus.janino.UnitCompiler$16.visitCast(UnitCompiler.java:4416) at 
> 

[GitHub] [flink] xintongsong commented on a change in pull request #11323: [FLINK-16439][k8s] Make KubernetesResourceManager starts workers using WorkerResourceSpec requested by SlotManager

2020-04-23 Thread GitBox


xintongsong commented on a change in pull request #11323:
URL: https://github.com/apache/flink/pull/11323#discussion_r414277526



##
File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/KubernetesResourceManager.java
##
@@ -320,5 +333,16 @@ private void internalStopPod(String podName) {
}
}
);
+
+   final KubernetesWorkerNode kubernetesWorkerNode = 
workerNodes.remove(resourceId);
+   final WorkerResourceSpec workerResourceSpec = 
podWorkerResources.remove(podName);
+
+   // If the stopped pod is requested in the current attempt 
(workerResourceSpec is known) and is not yet added,
+   // we need to notify ActiveResourceManager to decrease the 
pending worker count.
+   if (workerResourceSpec != null && kubernetesWorkerNode == null) 
{

Review comment:
   With `requestKubernetesPodIfRequired` now trying to request pods for all 
the `workerResourceSpec`s, there is not much difference in the failure handling 
of started vs. recovered pods. The only difference is whether we decrease the 
`pendingWorkerCounter` or not.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #11895: [FLINK-10911] Flink's flink-scala-shell is not working with Scala 2.12 [WIP]

2020-04-23 Thread GitBox


flinkbot commented on pull request #11895:
URL: https://github.com/apache/flink/pull/11895#issuecomment-618789667


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 191da9e8a608bde3657193779bc9f0a48350f36d (Fri Apr 24 
04:05:31 UTC 2020)
   
   **Warnings:**
* **2 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-10911).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11774: [FLINK-17020][runtime] Introduce GlobalDataExchangeMode for JobGraph generation

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11774:
URL: https://github.com/apache/flink/pull/11774#issuecomment-614576955


   
   ## CI report:
   
   * 1f72fb850f69449f4ef886ec0cad8a0644bab93d Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161169787) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7820)
 
   * b1316cec224ccda73e1eed8226c0b5b61f2c6e21 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zjffdu opened a new pull request #11895: [FLINK-10911] Flink's flink-scala-shell is not working with Scala 2.12 [WIP]

2020-04-23 Thread GitBox


zjffdu opened a new pull request #11895:
URL: https://github.com/apache/flink/pull/11895


   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
  - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't 
know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] lirui-apache commented on pull request #11876: [FLINK-17334] HIVE UDF BUGFIX

2020-04-23 Thread GitBox


lirui-apache commented on pull request #11876:
URL: https://github.com/apache/flink/pull/11876#issuecomment-618788643


   @Ruan-Xin I meant you can change `HiveSimpleUDF::getHiveResultType` to 
something like this:
   ```
@Override
public DataType getHiveResultType(Object[] constantArguments, DataType[] argTypes) {
    try {
        List<TypeInfo> argTypeInfo = new ArrayList<>();
        for (DataType argType : argTypes) {
            argTypeInfo.add(HiveTypeUtil.toHiveTypeInfo(argType, false));
        }
        // resolve the evaluate() overload that matches the argument types
        Method evalMethod = hiveFunctionWrapper.createFunction()
            .getResolver().getEvalMethod(argTypeInfo);

        // derive the Flink type from the method's generic return type via reflection
        return HiveTypeUtil.toFlinkType(
            ObjectInspectorFactory.getReflectionObjectInspector(
                evalMethod.getGenericReturnType(),
                ObjectInspectorFactory.ObjectInspectorOptions.JAVA));
    } catch (UDFArgumentException e) {
        throw new FlinkHiveUDFException(e);
    }
}
   ```
   So you don't have to handle key/value types by yourself. I haven't done 
thorough tests for this change, but it seems to work for simple cases at least.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-17363) Asynchronous association dimension table timeout

2020-04-23 Thread robert (Jira)
robert created FLINK-17363:
--

 Summary: Asynchronous association dimension table timeout
 Key: FLINK-17363
 URL: https://issues.apache.org/jira/browse/FLINK-17363
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream
Affects Versions: 1.9.0
Reporter: robert


java.lang.Exception: An async function call terminated with an exception. 
Failing the AsyncWaitOperator.
 at 
org.apache.flink.streaming.api.operators.async.Emitter.output(Emitter.java:137)
 at org.apache.flink.streaming.api.operators.async.Emitter.run(Emitter.java:85)
 at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: 
java.util.concurrent.TimeoutException: Async function call has timed out.
 at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
 at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
 at 
org.apache.flink.streaming.api.operators.async.queue.StreamRecordQueueEntry.get(StreamRecordQueueEntry.java:68)
 at 
org.apache.flink.streaming.api.operators.async.Emitter.output(Emitter.java:129)
 ... 2 more
Caused by: java.util.concurrent.TimeoutException: Async function call has timed 
out.
 at 
org.apache.flink.streaming.api.functions.async.AsyncFunction.timeout(AsyncFunction.java:97)
 at 
org.apache.flink.streaming.api.operators.async.AsyncWaitOperator$1.onProcessingTime(AsyncWaitOperator.java:215)
 at 
org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeService$TriggerTask.run(SystemProcessingTimeService.java:281)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 ... 1 more
2020-04-24 11:30:22,933 INFO 
org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Discarding 
checkpoint 492 of job 075a0b260dd779f28bc87aaa59d7222a.
java.lang.Exception: An async function call terminated with an exception. 
Failing the AsyncWaitOperator.
 at 
org.apache.flink.streaming.api.operators.async.Emitter.output(Emitter.java:137)
 at org.apache.flink.streaming.api.operators.async.Emitter.run(Emitter.java:85)
 at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: 
java.util.concurrent.TimeoutException: Async function call has timed out.
 at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
 at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
 at 
org.apache.flink.streaming.api.operators.async.queue.StreamRecordQueueEntry.get(StreamRecordQueueEntry.java:68)
 at 
org.apache.flink.streaming.api.operators.async.Emitter.output(Emitter.java:129)
 ... 2 more
Caused by: java.util.concurrent.TimeoutException: Async function call has timed 
out.
 at 
org.apache.flink.streaming.api.functions.async.AsyncFunction.timeout(AsyncFunction.java:97)
 at 
org.apache.flink.streaming.api.operators.async.AsyncWaitOperator$1.onProcessingTime(AsyncWaitOperator.java:215)
 at 
org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeService$TriggerTask.run(SystemProcessingTimeService.java:281)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 ... 1 more
2020-04-24 11:30:22,933 INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job sql_details 
(075a0b260dd779f28bc87aaa59d7222a) switched from state RUNNING to FAILING.
java.lang.Exception: An async function call terminated with an exception. 
Failing the AsyncWaitOperator.
 at 
org.apache.flink.streaming.api.operators.async.Emitter.output(Emitter.java:137)
 at org.apache.flink.streaming.api.operators.async.Emitter.run(Emitter.java:85)
 at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: 
java.util.concurrent.TimeoutException: Async function call has timed out.
 at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
 at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
 at 
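
For context, a hedged sketch of where this timeout is configured (the 60-second 
value, `input`, and `DimTableAsyncFunction` are illustrative): the per-record 
timeout passed to `AsyncDataStream` is what fires `AsyncFunction#timeout` in the 
trace above, and its default implementation fails the job.

```java
// Sketch: raise the per-record timeout, or override AsyncFunction#timeout to
// complete the result with a fallback value instead of failing the operator.
DataStream<Row> enriched = AsyncDataStream.unorderedWait(
    input,                        // the stream to enrich
    new DimTableAsyncFunction(),  // hypothetical async dimension-table lookup
    60, TimeUnit.SECONDS,         // timeout that triggers the exception above
    100);                         // max in-flight async requests
```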

[jira] [Commented] (FLINK-17308) ExecutionGraphCache cachedExecutionGraphs not cleanup cause OOM Bug

2020-04-23 Thread sunjincheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091144#comment-17091144
 ] 

sunjincheng commented on FLINK-17308:
-

Reset the fix version as 1.9.3 was released.

> ExecutionGraphCache cachedExecutionGraphs not cleanup cause OOM Bug
> ---
>
> Key: FLINK-17308
> URL: https://issues.apache.org/jira/browse/FLINK-17308
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.9.0, 1.9.1, 1.9.2, 1.10.0
>Reporter: yujunyong
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0, 1.9.4
>
>
> The class org.apache.flink.runtime.rest.handler.legacy.ExecutionGraphCache 
> caches job execution graphs in the field "cachedExecutionGraphs" when the 
> method "getExecutionGraph" is called, but Flink never calls its cleanup 
> method. This causes a JobManager OutOfMemoryError when submitting many batch 
> jobs and fetching their info, because these operations cache all the job 
> execution graphs and the "cleanup" method is never invoked.
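
A hedged sketch of one remedy (the 20-minute interval and the scheduling site 
are illustrative, not the actual patch; `ExecutionGraphCache#cleanup` is the 
existing method that evicts expired entries):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Schedule periodic cleanup so expired ExecutionGraph entries are evicted
// instead of accumulating until the JobManager runs out of memory.
ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
executor.scheduleWithFixedDelay(
    executionGraphCache::cleanup,  // evicts entries whose TTL has expired
    20, 20, TimeUnit.MINUTES);
```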



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhuzhurk commented on a change in pull request #11857: [FLINK-17180][runtime] Implement SchedulingPipelinedRegion interface

2020-04-23 Thread GitBox


zhuzhurk commented on a change in pull request #11857:
URL: https://github.com/apache/flink/pull/11857#discussion_r414264754



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/flip1/RestartPipelinedRegionFailoverStrategy.java
##
@@ -84,34 +77,8 @@ public RestartPipelinedRegionFailoverStrategy(
ResultPartitionAvailabilityChecker 
resultPartitionAvailabilityChecker) {
 
this.topology = checkNotNull(topology);
-   this.regions = Collections.newSetFromMap(new 
IdentityHashMap<>());
-   this.vertexToRegionMap = new HashMap<>();
this.resultPartitionAvailabilityChecker = new 
RegionFailoverResultPartitionAvailabilityChecker(
resultPartitionAvailabilityChecker);
-
-   // build regions based on the given topology
-   LOG.info("Start building failover regions.");
-   buildFailoverRegions();
-   }
-   // ------------------------------------------------------------------------
-   //  region building
-   // ------------------------------------------------------------------------
-
-   private void buildFailoverRegions() {
-   final Set<Set<SchedulingExecutionVertex>> distinctRegions =
-   PipelinedRegionComputeUtil.computePipelinedRegions(topology);
-
-   // creating all the failover regions and register them
-   for (Set<SchedulingExecutionVertex> regionVertices : distinctRegions) {
-   LOG.debug("Creating a failover region with {} 
vertices.", regionVertices.size());
-   final FailoverRegion failoverRegion = new 
FailoverRegion(regionVertices);
-   regions.add(failoverRegion);
-   for (SchedulingExecutionVertex vertex : regionVertices) 
{
-   vertexToRegionMap.put(vertex.getId(), 
failoverRegion);
-   }
-   }
-
-   LOG.info("Created {} failover regions.", regions.size());

Review comment:
   Previously it was at INFO level, and I'd still prefer to keep it at INFO. 
Regions are basic components now, but users can get little information about 
them at the moment. This log helps to verify that regions are generated as 
expected, especially since there are multiple GlobalDataExchangeModes which can 
lead to different region building results.
   Also, it is a short, one-time log, so it would not clutter the logs or 
cause any performance issue.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17308) ExecutionGraphCache cachedExecutionGraphs not cleanup cause OOM Bug

2020-04-23 Thread sunjincheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-17308:

Fix Version/s: (was: 1.9.3)
   1.9.4

> ExecutionGraphCache cachedExecutionGraphs not cleanup cause OOM Bug
> ---
>
> Key: FLINK-17308
> URL: https://issues.apache.org/jira/browse/FLINK-17308
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.9.0, 1.9.1, 1.9.2, 1.10.0
>Reporter: yujunyong
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0, 1.9.4
>
>
> The class org.apache.flink.runtime.rest.handler.legacy.ExecutionGraphCache 
> caches job execution graphs in the field "cachedExecutionGraphs" when the 
> method "getExecutionGraph" is called, but Flink never calls its cleanup 
> method. This causes a JobManager OutOfMemoryError when submitting many batch 
> jobs and fetching their info, because these operations cache all the job 
> execution graphs and the "cleanup" method is never invoked.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #11894: [WIP][FLINK-10911] Flink's flink-scala-shell is not working with Scala 2.12

2020-04-23 Thread GitBox


flinkbot commented on pull request #11894:
URL: https://github.com/apache/flink/pull/11894#issuecomment-618785514


   
   ## CI report:
   
   * 191da9e8a608bde3657193779bc9f0a48350f36d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk commented on a change in pull request #11857: [FLINK-17180][runtime] Implement SchedulingPipelinedRegion interface

2020-04-23 Thread GitBox


zhuzhurk commented on a change in pull request #11857:
URL: https://github.com/apache/flink/pull/11857#discussion_r414264754



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/flip1/RestartPipelinedRegionFailoverStrategy.java
##
@@ -84,34 +77,8 @@ public RestartPipelinedRegionFailoverStrategy(
ResultPartitionAvailabilityChecker 
resultPartitionAvailabilityChecker) {
 
this.topology = checkNotNull(topology);
-   this.regions = Collections.newSetFromMap(new 
IdentityHashMap<>());
-   this.vertexToRegionMap = new HashMap<>();
this.resultPartitionAvailabilityChecker = new 
RegionFailoverResultPartitionAvailabilityChecker(
resultPartitionAvailabilityChecker);
-
-   // build regions based on the given topology
-   LOG.info("Start building failover regions.");
-   buildFailoverRegions();
-   }
-   // ------------------------------------------------------------------------
-   //  region building
-   // ------------------------------------------------------------------------
-
-   private void buildFailoverRegions() {
-   final Set<Set<SchedulingExecutionVertex>> distinctRegions =
-   PipelinedRegionComputeUtil.computePipelinedRegions(topology);
-
-   // creating all the failover regions and register them
-   for (Set<SchedulingExecutionVertex> regionVertices : distinctRegions) {
-   LOG.debug("Creating a failover region with {} 
vertices.", regionVertices.size());
-   final FailoverRegion failoverRegion = new 
FailoverRegion(regionVertices);
-   regions.add(failoverRegion);
-   for (SchedulingExecutionVertex vertex : regionVertices) 
{
-   vertexToRegionMap.put(vertex.getId(), 
failoverRegion);
-   }
-   }
-
-   LOG.info("Created {} failover regions.", regions.size());

Review comment:
   Previously it was at INFO level, and I'd still prefer to keep it at INFO. 
Regions are basic components now, but users can get little information about 
them at the moment. This log helps to verify that regions are generated as 
expected.
   Also, it is a short, one-time log, so it would not clutter the logs or 
cause any performance issue.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11791: [FLINK-17210][sql-parser][hive] Implement database DDLs for Hive dialect

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11791:
URL: https://github.com/apache/flink/pull/11791#issuecomment-615162346


   
   ## CI report:
   
   * a6642cdfead306f1192f73c861e7201af9d38c56 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/160705250) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7651)
 
   * 6359340c0b5b01923d7ac2824b9eb1071e0f2f37 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161755234) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=158)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk commented on a change in pull request #11774: [FLINK-17020][runtime] Introduce GlobalDataExchangeMode for JobGraph generation

2020-04-23 Thread GitBox


zhuzhurk commented on a change in pull request #11774:
URL: https://github.com/apache/flink/pull/11774#discussion_r414269826



##
File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/StreamingJobGraphGeneratorTest.java
##
@@ -898,6 +825,116 @@ public void 
testSlotSharingOnAllVerticesInSameSlotSharingGroupByDefaultDisabled(
assertDistinctSharingGroups(source1Vertex, source2Vertex, 
map2Vertex);
}
 
+   @Test
+   public void testDefaultGlobalDataExchangeModeIsAllEdgesPipelined() {
+   final StreamGraph streamGraph = 
createStreamGraphForGlobalDataExchangeModeTests();
+   assertThat(streamGraph.getGlobalDataExchangeMode(), 
is(GlobalDataExchangeMode.ALL_EDGES_PIPELINED));
+   }
+
+   @Test
+   public void testAllEdgesBlockingMode() {
+   final StreamGraph streamGraph = 
createStreamGraphForGlobalDataExchangeModeTests();
+   
streamGraph.setGlobalDataExchangeMode(GlobalDataExchangeMode.ALL_EDGES_BLOCKING);
+   final JobGraph jobGraph = 
StreamingJobGraphGenerator.createJobGraph(streamGraph);
+
+   final List<JobVertex> verticesSorted = 
jobGraph.getVerticesSortedTopologicallyFromSources();
+   final JobVertex sourceVertex = verticesSorted.get(0);
+   final JobVertex map1Vertex = verticesSorted.get(1);
+   final JobVertex map2Vertex = verticesSorted.get(2);
+
+   assertEquals(ResultPartitionType.BLOCKING, 
sourceVertex.getProducedDataSets().get(0).getResultType());
+   assertEquals(ResultPartitionType.BLOCKING, 
map1Vertex.getProducedDataSets().get(0).getResultType());
+   assertEquals(ResultPartitionType.BLOCKING, 
map2Vertex.getProducedDataSets().get(0).getResultType());
+   }
+
+   @Test
+   public void testAllEdgesPipelinedMode() {
+   final StreamGraph streamGraph = 
createStreamGraphForGlobalDataExchangeModeTests();
+   
streamGraph.setGlobalDataExchangeMode(GlobalDataExchangeMode.ALL_EDGES_PIPELINED);
+   final JobGraph jobGraph = 
StreamingJobGraphGenerator.createJobGraph(streamGraph);
+
+   final List<JobVertex> verticesSorted = 
jobGraph.getVerticesSortedTopologicallyFromSources();
+   final JobVertex sourceVertex = verticesSorted.get(0);
+   final JobVertex map1Vertex = verticesSorted.get(1);
+   final JobVertex map2Vertex = verticesSorted.get(2);
+
+   assertEquals(ResultPartitionType.PIPELINED_BOUNDED, 
sourceVertex.getProducedDataSets().get(0).getResultType());
+   assertEquals(ResultPartitionType.PIPELINED_BOUNDED, 
map1Vertex.getProducedDataSets().get(0).getResultType());
+   assertEquals(ResultPartitionType.PIPELINED_BOUNDED, 
map2Vertex.getProducedDataSets().get(0).getResultType());
+   }
+
+   @Test
+   public void testForwardEdgesPipelinedMode() {
+   final StreamGraph streamGraph = 
createStreamGraphForGlobalDataExchangeModeTests();
+   
streamGraph.setGlobalDataExchangeMode(GlobalDataExchangeMode.FORWARD_EDGES_PIPELINED);
+   final JobGraph jobGraph = 
StreamingJobGraphGenerator.createJobGraph(streamGraph);
+
+   final List<JobVertex> verticesSorted = 
jobGraph.getVerticesSortedTopologicallyFromSources();
+   final JobVertex sourceVertex = verticesSorted.get(0);
+   final JobVertex map1Vertex = verticesSorted.get(1);
+   final JobVertex map2Vertex = verticesSorted.get(2);
+
+   assertEquals(ResultPartitionType.PIPELINED_BOUNDED, 
sourceVertex.getProducedDataSets().get(0).getResultType());
+   assertEquals(ResultPartitionType.BLOCKING, 
map1Vertex.getProducedDataSets().get(0).getResultType());
+   assertEquals(ResultPartitionType.BLOCKING, 
map2Vertex.getProducedDataSets().get(0).getResultType());
+   }
+
+   @Test
+   public void testPointwiseEdgesPipelinedMode() {
+   final StreamGraph streamGraph = 
createStreamGraphForGlobalDataExchangeModeTests();
+   
streamGraph.setGlobalDataExchangeMode(GlobalDataExchangeMode.POINTWISE_EDGES_PIPELINED);
+   final JobGraph jobGraph = 
StreamingJobGraphGenerator.createJobGraph(streamGraph);
+
+   final List<JobVertex> verticesSorted = 
jobGraph.getVerticesSortedTopologicallyFromSources();
+   final JobVertex sourceVertex = verticesSorted.get(0);
+   final JobVertex map1Vertex = verticesSorted.get(1);
+   final JobVertex map2Vertex = verticesSorted.get(2);
+
+   assertEquals(ResultPartitionType.PIPELINED_BOUNDED, 
sourceVertex.getProducedDataSets().get(0).getResultType());
+   assertEquals(ResultPartitionType.PIPELINED_BOUNDED, 
map1Vertex.getProducedDataSets().get(0).getResultType());
+   assertEquals(ResultPartitionType.BLOCKING, 

[GitHub] [flink] zhuzhurk commented on a change in pull request #11774: [FLINK-17020][runtime] Introduce GlobalDataExchangeMode for JobGraph generation

2020-04-23 Thread GitBox


zhuzhurk commented on a change in pull request #11774:
URL: https://github.com/apache/flink/pull/11774#discussion_r414270078



##
File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/StreamingJobGraphGeneratorTest.java
##
@@ -898,6 +825,116 @@ public void 
testSlotSharingOnAllVerticesInSameSlotSharingGroupByDefaultDisabled(
assertDistinctSharingGroups(source1Vertex, source2Vertex, 
map2Vertex);
}
 
+   @Test
+   public void testDefaultGlobalDataExchangeModeIsAllEdgesPipelined() {
+   final StreamGraph streamGraph = 
createStreamGraphForGlobalDataExchangeModeTests();
+   assertThat(streamGraph.getGlobalDataExchangeMode(), 
is(GlobalDataExchangeMode.ALL_EDGES_PIPELINED));
+   }
+
+   @Test
+   public void testAllEdgesBlockingMode() {
+   final StreamGraph streamGraph = 
createStreamGraphForGlobalDataExchangeModeTests();
+   
streamGraph.setGlobalDataExchangeMode(GlobalDataExchangeMode.ALL_EDGES_BLOCKING);
+   final JobGraph jobGraph = 
StreamingJobGraphGenerator.createJobGraph(streamGraph);
+
+   final List<JobVertex> verticesSorted = 
jobGraph.getVerticesSortedTopologicallyFromSources();
+   final JobVertex sourceVertex = verticesSorted.get(0);
+   final JobVertex map1Vertex = verticesSorted.get(1);
+   final JobVertex map2Vertex = verticesSorted.get(2);
+
+   assertEquals(ResultPartitionType.BLOCKING, 
sourceVertex.getProducedDataSets().get(0).getResultType());
+   assertEquals(ResultPartitionType.BLOCKING, 
map1Vertex.getProducedDataSets().get(0).getResultType());
+   assertEquals(ResultPartitionType.BLOCKING, 
map2Vertex.getProducedDataSets().get(0).getResultType());
+   }
+
+   @Test
+   public void testAllEdgesPipelinedMode() {
+   final StreamGraph streamGraph = 
createStreamGraphForGlobalDataExchangeModeTests();
+   
streamGraph.setGlobalDataExchangeMode(GlobalDataExchangeMode.ALL_EDGES_PIPELINED);
+   final JobGraph jobGraph = 
StreamingJobGraphGenerator.createJobGraph(streamGraph);
+
+   final List<JobVertex> verticesSorted = 
jobGraph.getVerticesSortedTopologicallyFromSources();
+   final JobVertex sourceVertex = verticesSorted.get(0);
+   final JobVertex map1Vertex = verticesSorted.get(1);
+   final JobVertex map2Vertex = verticesSorted.get(2);
+
+   assertEquals(ResultPartitionType.PIPELINED_BOUNDED, 
sourceVertex.getProducedDataSets().get(0).getResultType());
+   assertEquals(ResultPartitionType.PIPELINED_BOUNDED, 
map1Vertex.getProducedDataSets().get(0).getResultType());
+   assertEquals(ResultPartitionType.PIPELINED_BOUNDED, 
map2Vertex.getProducedDataSets().get(0).getResultType());
+   }
+
+   @Test
+   public void testForwardEdgesPipelinedMode() {
+   final StreamGraph streamGraph = 
createStreamGraphForGlobalDataExchangeModeTests();
+   
streamGraph.setGlobalDataExchangeMode(GlobalDataExchangeMode.FORWARD_EDGES_PIPELINED);
+   final JobGraph jobGraph = 
StreamingJobGraphGenerator.createJobGraph(streamGraph);
+
+   final List<JobVertex> verticesSorted = 
jobGraph.getVerticesSortedTopologicallyFromSources();
+   final JobVertex sourceVertex = verticesSorted.get(0);
+   final JobVertex map1Vertex = verticesSorted.get(1);
+   final JobVertex map2Vertex = verticesSorted.get(2);
+
+   assertEquals(ResultPartitionType.PIPELINED_BOUNDED, 
sourceVertex.getProducedDataSets().get(0).getResultType());
+   assertEquals(ResultPartitionType.BLOCKING, 
map1Vertex.getProducedDataSets().get(0).getResultType());
+   assertEquals(ResultPartitionType.BLOCKING, 
map2Vertex.getProducedDataSets().get(0).getResultType());
+   }
+
+   @Test
+   public void testPointwiseEdgesPipelinedMode() {
+   final StreamGraph streamGraph = 
createStreamGraphForGlobalDataExchangeModeTests();
+   
streamGraph.setGlobalDataExchangeMode(GlobalDataExchangeMode.POINTWISE_EDGES_PIPELINED);
+   final JobGraph jobGraph = 
StreamingJobGraphGenerator.createJobGraph(streamGraph);
+
+   final List<JobVertex> verticesSorted = 
jobGraph.getVerticesSortedTopologicallyFromSources();
+   final JobVertex sourceVertex = verticesSorted.get(0);
+   final JobVertex map1Vertex = verticesSorted.get(1);
+   final JobVertex map2Vertex = verticesSorted.get(2);
+
+   assertEquals(ResultPartitionType.PIPELINED_BOUNDED, 
sourceVertex.getProducedDataSets().get(0).getResultType());
+   assertEquals(ResultPartitionType.PIPELINED_BOUNDED, 
map1Vertex.getProducedDataSets().get(0).getResultType());
+   assertEquals(ResultPartitionType.BLOCKING, 

[jira] [Commented] (FLINK-17334) Flink does not support HIVE UDFs with primitive return types

2020-04-23 Thread xin.ruan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091141#comment-17091141
 ] 

xin.ruan commented on FLINK-17334:
--

Hi all, I want to support array and map types, but I found that I can't get the 
key and value types of a Map type, and therefore can't create the corresponding 
object inspector; I have added a picture to my question.

If I want to create a Map's object inspector, I need to call

ObjectInspectorFactory.getStandardMapObjectInspector(getObjectInspector(mapType.getMapKeyTypeInfo()),
 getObjectInspector(mapType.getMapValueTypeInfo()));

and for that I need to get the MapKeyTypeInfo() and MapValueTypeInfo().
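
A hedged sketch of the recursive map case being discussed (the `TypeInfo`-based 
dispatch and the `TypeInfoUtils` fallback are assumptions for illustration, not 
the committed fix):

```java
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.typeinfo.MapTypeInfo;
import org.apache.hadoop.hive.serde2.typeinfo.TypeInfo;
import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils;

static ObjectInspector getObjectInspector(TypeInfo type) {
    if (type instanceof MapTypeInfo) {
        MapTypeInfo mapType = (MapTypeInfo) type;
        // recurse into the key and value TypeInfos to build the map inspector
        return ObjectInspectorFactory.getStandardMapObjectInspector(
            getObjectInspector(mapType.getMapKeyTypeInfo()),
            getObjectInspector(mapType.getMapValueTypeInfo()));
    }
    // non-map types: fall back to the standard Java object inspector
    return TypeInfoUtils.getStandardJavaObjectInspectorFromTypeInfo(type);
}
```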

>  Flink does not support HIVE UDFs with primitive return types
> -
>
> Key: FLINK-17334
> URL: https://issues.apache.org/jira/browse/FLINK-17334
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
>Reporter: xin.ruan
>Assignee: xin.ruan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.1
>
> Attachments: screenshot-1.png
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> We are currently migrating Hive UDF to Flink. While testing compatibility, we 
> found that Flink cannot support primitive types like boolean, int, etc.
> Hive UDF:
> public class UDFTest extends UDF {
>     public boolean evaluate(String content) {
>         if (StringUtils.isEmpty(content)) {
>             return false;
>         } else {
>             return true;
>         }
>     }
> }
> We found that the following error will be reported:
>  Caused by: org.apache.flink.table.functions.hive.FlinkHiveUDFException: 
> Class boolean is not supported yet
>  at 
> org.apache.flink.table.functions.hive.conversion.HiveInspectors.getObjectInspector(HiveInspectors.java:372)
>  at 
> org.apache.flink.table.functions.hive.HiveSimpleUDF.getHiveResultType(HiveSimpleUDF.java:133)
> I found that if I add a type comparison for the primitive types in 
> HiveInspectors.getObjectInspector, I can get the correct result, as follows:
> public static ObjectInspector getObjectInspector(HiveShim hiveShim, Class clazz) {
>     ..
>     else if (clazz.equals(boolean.class) || clazz.equals(Boolean.class)
>             || clazz.equals(BooleanWritable.class)) {
>         typeInfo = TypeInfoFactory.booleanTypeInfo;
>     }
>     ..
> }
>  !screenshot-1.png! 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17086) Flink sql client not able to read parquet hive table because `HiveMapredSplitReader` not supports name mapping reading for parquet format.

2020-04-23 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091140#comment-17091140
 ] 

Rui Li commented on FLINK-17086:


Hi [~leiwangouc], FLINK-16802 has been fixed; you can try it and see whether it 
fixes this issue.

> Flink sql client not able to read parquet hive table because  
> `HiveMapredSplitReader` not supports name mapping reading for parquet format.
> ---
>
> Key: FLINK-17086
> URL: https://issues.apache.org/jira/browse/FLINK-17086
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
>Reporter: Lei Wang
>Priority: Major
>
> When writing a Hive table in parquet format, the Flink SQL client is not able 
> to read it correctly because HiveMapredSplitReader does not support name-mapped 
> reading for the parquet format.
> [http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/fink-sql-client-not-able-to-read-parquet-format-table-td34119.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17334) Flink does not support HIVE UDFs with primitive return types

2020-04-23 Thread xin.ruan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xin.ruan updated FLINK-17334:
-
Attachment: screenshot-1.png

>  Flink does not support HIVE UDFs with primitive return types
> -
>
> Key: FLINK-17334
> URL: https://issues.apache.org/jira/browse/FLINK-17334
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
>Reporter: xin.ruan
>Assignee: xin.ruan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.1
>
> Attachments: screenshot-1.png
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> We are currently migrating Hive UDF to Flink. While testing compatibility, we 
> found that Flink cannot support primitive types like boolean, int, etc.
> Hive UDF:
> public class UDFTest extends UDF {
>     public boolean evaluate(String content) {
>         if (StringUtils.isEmpty(content)) {
>             return false;
>         } else {
>             return true;
>         }
>     }
> }
> We found that the following error will be reported:
>  Caused by: org.apache.flink.table.functions.hive.FlinkHiveUDFException: 
> Class boolean is not supported yet
>  at 
> org.apache.flink.table.functions.hive.conversion.HiveInspectors.getObjectInspector(HiveInspectors.java:372)
>  at 
> org.apache.flink.table.functions.hive.HiveSimpleUDF.getHiveResultType(HiveSimpleUDF.java:133)
> I found that if I add a type comparison for the primitive types in 
> HiveInspectors.getObjectInspector, I can get the correct result, as follows:
> public static ObjectInspector getObjectInspector(HiveShim hiveShim, Class clazz) {
>     ..
>     else if (clazz.equals(boolean.class) || clazz.equals(Boolean.class)
>             || clazz.equals(BooleanWritable.class)) {
>         typeInfo = TypeInfoFactory.booleanTypeInfo;
>     }
>     ..
> }
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17334) Flink does not support HIVE UDFs with primitive return types

2020-04-23 Thread xin.ruan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xin.ruan updated FLINK-17334:
-
Description: 
We are currently migrating Hive UDF to Flink. While testing compatibility, we 
found that Flink cannot support primitive types like boolean, int, etc.

Hive UDF:

public class UDFTest extends UDF {
    public boolean evaluate(String content) {
        if (StringUtils.isEmpty(content)) {
            return false;
        } else {
            return true;
        }
    }
}

We found that the following error will be reported:
 Caused by: org.apache.flink.table.functions.hive.FlinkHiveUDFException: Class 
boolean is not supported yet
 at 
org.apache.flink.table.functions.hive.conversion.HiveInspectors.getObjectInspector(HiveInspectors.java:372)
 at 
org.apache.flink.table.functions.hive.HiveSimpleUDF.getHiveResultType(HiveSimpleUDF.java:133)

I found that if I add a type comparison for the primitive types in 
HiveInspectors.getObjectInspector, I can get the correct result, as follows:

public static ObjectInspector getObjectInspector(HiveShim hiveShim, Class clazz) {
    ..
    else if (clazz.equals(boolean.class) || clazz.equals(Boolean.class)
            || clazz.equals(BooleanWritable.class)) {
        typeInfo = TypeInfoFactory.booleanTypeInfo;
    }
    ..
}

 !screenshot-1.png! 


 

  was:
We are currently migrating Hive UDF to Flink. While testing compatibility, we 
found that Flink cannot support primitive types like boolean, int, etc.

Hive UDF:

public class UDFTest extends UDF {
    public boolean evaluate(String content) {
        if (StringUtils.isEmpty(content)) {
            return false;
        } else {
            return true;
        }
    }
}

We found that the following error will be reported:
 Caused by: org.apache.flink.table.functions.hive.FlinkHiveUDFException: Class 
boolean is not supported yet
 at 
org.apache.flink.table.functions.hive.conversion.HiveInspectors.getObjectInspector(HiveInspectors.java:372)
 at 
org.apache.flink.table.functions.hive.HiveSimpleUDF.getHiveResultType(HiveSimpleUDF.java:133)

I found that if I add a type comparison for the primitive types in 
HiveInspectors.getObjectInspector, I can get the correct result, as follows:

public static ObjectInspector getObjectInspector(HiveShim hiveShim, Class clazz) {
    ..
    else if (clazz.equals(boolean.class) || clazz.equals(Boolean.class)
            || clazz.equals(BooleanWritable.class)) {
        typeInfo = TypeInfoFactory.booleanTypeInfo;
    }
    ..
}




 


>  Flink does not support HIVE UDFs with primitive return types
> -
>
> Key: FLINK-17334
> URL: https://issues.apache.org/jira/browse/FLINK-17334
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
>Reporter: xin.ruan
>Assignee: xin.ruan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.1
>
> Attachments: screenshot-1.png
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> We are currently migrating Hive UDF to Flink. While testing compatibility, we 
> found that Flink cannot support primitive types like boolean, int, etc.
> Hive UDF:
> public class UDFTest extends UDF {
>     public boolean evaluate(String content) {
>         if (StringUtils.isEmpty(content)) {
>             return false;
>         } else {
>             return true;
>         }
>     }
> }
> We found that the following error will be reported:
>  Caused by: org.apache.flink.table.functions.hive.FlinkHiveUDFException: 
> Class boolean is not supported yet
>  at 
> org.apache.flink.table.functions.hive.conversion.HiveInspectors.getObjectInspector(HiveInspectors.java:372)
>  at 
> org.apache.flink.table.functions.hive.HiveSimpleUDF.getHiveResultType(HiveSimpleUDF.java:133)
> I found that if I add a type comparison for the primitive types in 
> HiveInspectors.getObjectInspector, I can get the correct result, as follows:
> public static ObjectInspector getObjectInspector(HiveShim hiveShim, Class clazz) {
>     ..
>     else if (clazz.equals(boolean.class) || clazz.equals(Boolean.class)
>             || clazz.equals(BooleanWritable.class)) {
>         typeInfo = TypeInfoFactory.booleanTypeInfo;
>     }
>     ..
> }
>  !screenshot-1.png! 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14543) Support partition for temporary table

2020-04-23 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-14543:
-
Fix Version/s: 1.11.0

> Support partition for temporary table
> -
>
> Key: FLINK-14543
> URL: https://issues.apache.org/jira/browse/FLINK-14543
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-14543) Support partition for temporary table

2020-04-23 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-14543.

Resolution: Fixed

master: 6976e50e68f64d133fbf223a56db4a8b09e079cb

> Support partition for temporary table
> -
>
> Key: FLINK-14543
> URL: https://issues.apache.org/jira/browse/FLINK-14543
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-15901) Support partitioned generic table in HiveCatalog

2020-04-23 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-15901.

Resolution: Fixed

master: 3a30886e300835e158177944f734a47453a4beab

> Support partitioned generic table in HiveCatalog
> 
>
> Key: FLINK-15901
> URL: https://issues.apache.org/jira/browse/FLINK-15901
> Project: Flink
>  Issue Type: Task
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
>Reporter: Rui Li
>Assignee: Jingsong Lee
>Priority: Major
> Fix For: 1.11.0
>
>
> We disallowed partitioned generic table in HiveCatalog as part of 
> FLINK-15858. This ticket aims to remove that limitation and enable the 
> related tests in {{HiveCatalogGenericMetadataTest}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #11894: [WIP][FLINK-10911] Flink's flink-scala-shell is not working with Scala 2.12

2020-04-23 Thread GitBox


flinkbot commented on pull request #11894:
URL: https://github.com/apache/flink/pull/11894#issuecomment-618783169


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 191da9e8a608bde3657193779bc9f0a48350f36d (Fri Apr 24 
03:35:24 UTC 2020)
   
   **Warnings:**
* **2 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-10911).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-14356) Introduce "single-field" format to (de)serialize message to a single field

2020-04-23 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-14356:

Fix Version/s: 1.11.0

> Introduce "single-field" format to (de)serialize message to a single field
> --
>
> Key: FLINK-14356
> URL: https://issues.apache.org/jira/browse/FLINK-14356
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / API
>Reporter: jinfeng
>Assignee: jinfeng
>Priority: Major
> Fix For: 1.11.0
>
>
> I want to use Flink SQL to write Kafka messages directly to HDFS, with no 
> serialization or deserialization of the messages in between: the raw bytes of 
> each message should simply become the first field of the Row. However, the 
> current RowSerializationSchema does not support converting bytes to VARBINARY. 
> Can we add a special RowSerializationSchema and RowDeserializationSchema for 
> this? 
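
A minimal sketch of what such a "single-field" format could look like (the class 
name and design below are illustrative, not an agreed proposal; only Flink's 
existing DeserializationSchema interface is assumed): the raw message bytes are 
placed unparsed into the single field of a Row.

{code:java}
import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.typeutils.RowTypeInfo;
import org.apache.flink.types.Row;

// Illustrative sketch: wrap the raw Kafka message bytes into the single
// (VARBINARY-like) field of a Row without any parsing in between.
public class SingleFieldDeserializationSchema implements DeserializationSchema<Row> {

	@Override
	public Row deserialize(byte[] message) {
		return Row.of(message);
	}

	@Override
	public boolean isEndOfStream(Row nextElement) {
		return false;
	}

	@Override
	public TypeInformation<Row> getProducedType() {
		return new RowTypeInfo(Types.PRIMITIVE_ARRAY(Types.BYTE));
	}
}
{code}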



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17334) Flink does not support HIVE UDFs with primitive return types

2020-04-23 Thread xin.ruan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xin.ruan updated FLINK-17334:
-
Description: 
We are currently migrating Hive UDFs to Flink. While testing compatibility, we 
found that Flink cannot support primitive return types like boolean, int, etc.

Hive UDF:

{code:java}
public class UDFTest extends UDF {
  public boolean evaluate(String content) {
    if (StringUtils.isEmpty(content)) {
      return false;
    } else {
      return true;
    }
  }
}
{code}

We found that the following error will be reported:

{noformat}
Caused by: org.apache.flink.table.functions.hive.FlinkHiveUDFException: Class boolean is not supported yet
 at org.apache.flink.table.functions.hive.conversion.HiveInspectors.getObjectInspector(HiveInspectors.java:372)
 at org.apache.flink.table.functions.hive.HiveSimpleUDF.getHiveResultType(HiveSimpleUDF.java:133)
{noformat}

I found that if I add a comparison for the primitive type in 
HiveInspectors.getObjectInspector, I can get the correct result, as follows:

{code:java}
public static ObjectInspector getObjectInspector(HiveShim hiveShim, Class clazz) {
  ..
  else if (clazz.equals(boolean.class) || clazz.equals(Boolean.class)
      || clazz.equals(BooleanWritable.class)) {
    typeInfo = TypeInfoFactory.booleanTypeInfo;
  }
  ..
}
{code}
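
For broader coverage, a hedged sketch of extending the same mapping to the other 
primitives mentioned (an illustrative helper, not the actual Flink patch; the 
TypeInfo and Writable classes are Hive's and Hadoop's):

{code:java}
import org.apache.hadoop.hive.serde2.typeinfo.TypeInfo;
import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory;
import org.apache.hadoop.io.BooleanWritable;
import org.apache.hadoop.io.IntWritable;

final class PrimitiveTypeInfos {

	// Map primitive, wrapper and Writable classes to a Hive TypeInfo so that
	// getObjectInspector can handle primitive UDF return types.
	static TypeInfo forClass(Class<?> clazz) {
		if (clazz == boolean.class || clazz == Boolean.class || clazz == BooleanWritable.class) {
			return TypeInfoFactory.booleanTypeInfo;
		}
		if (clazz == int.class || clazz == Integer.class || clazz == IntWritable.class) {
			return TypeInfoFactory.intTypeInfo;
		}
		throw new IllegalArgumentException("Class " + clazz.getName() + " is not supported yet");
	}
}
{code}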




 



>  Flink does not support HIVE UDFs with primitive return types
> -
>
> Key: FLINK-17334
> URL: https://issues.apache.org/jira/browse/FLINK-17334
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
>Reporter: xin.ruan
>Assignee: xin.ruan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.1
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> We are currently migrating Hive UDFs to Flink. While testing compatibility, we 
> found that Flink cannot support primitive return types like boolean, int, etc.
> Hive UDF:
> {code:java}
> public class UDFTest extends UDF {
>   public boolean evaluate(String content) {
>     if (StringUtils.isEmpty(content)) {
>       return false;
>     } else {
>       return true;
>     }
>   }
> }
> {code}
> We found that the following error will be reported:
> {noformat}
> Caused by: org.apache.flink.table.functions.hive.FlinkHiveUDFException: Class boolean is not supported yet
>  at org.apache.flink.table.functions.hive.conversion.HiveInspectors.getObjectInspector(HiveInspectors.java:372)
>  at org.apache.flink.table.functions.hive.HiveSimpleUDF.getHiveResultType(HiveSimpleUDF.java:133)
> {noformat}
> I found that if I add a comparison for the primitive type in 
> HiveInspectors.getObjectInspector, I can get the correct result, as follows:
> {code:java}
> public static ObjectInspector getObjectInspector(HiveShim hiveShim, Class clazz) {
>   ..
>   else if (clazz.equals(boolean.class) || clazz.equals(Boolean.class)
>       || clazz.equals(BooleanWritable.class)) {
>     typeInfo = TypeInfoFactory.booleanTypeInfo;
>   }
>   ..
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-10911) Flink's flink-scala-shell is not working with Scala 2.12

2020-04-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-10911:
---
Labels: pull-request-available  (was: )

> Flink's flink-scala-shell is not working with Scala 2.12
> 
>
> Key: FLINK-10911
> URL: https://issues.apache.org/jira/browse/FLINK-10911
> Project: Flink
>  Issue Type: Bug
>  Components: Scala Shell
>Affects Versions: 1.7.0
>Reporter: Till Rohrmann
>Priority: Major
>  Labels: pull-request-available
>
> Flink's {{flink-scala-shell}} module is not working with Scala 2.12. 
> Therefore, it is currently excluded from the Scala 2.12 builds.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zjffdu opened a new pull request #11894: [WIP][FLINK-10911] Flink's flink-scala-shell is not working with Scala 2.12

2020-04-23 Thread GitBox


zjffdu opened a new pull request #11894:
URL: https://github.com/apache/flink/pull/11894


   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't 
know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-16803) Need to make sure partition inherit table spec when writing to Hive tables

2020-04-23 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-16803.


> Need to make sure partition inherit table spec when writing to Hive tables
> --
>
> Key: FLINK-16803
> URL: https://issues.apache.org/jira/browse/FLINK-16803
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When inserting to a partition, Hive will update the partition's SD according 
> to table SD. For example, consider the following use case:
> {code}
> create table foo (x int,y int) partitioned by (p string);
> insert overwrite table foo partition (p='1') select * from ...;
> alter table foo set fileformat rcfile;
> insert overwrite table foo partition (p='1') select * from ...;
> {code}
> The second INSERT will write RC files to the partition and therefore the 
> partition SD needs to be updated to use RC file input format. Otherwise we 
> hit exception when reading from this partition.
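
A hedged sketch of the required behavior in terms of the Hive metastore API (not 
the Flink patch itself): before writing, the partition's storage descriptor is 
aligned with the table's current one.

{code:java}
import java.util.Collections;

import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.Partition;
import org.apache.hadoop.hive.metastore.api.Table;

class PartitionSdAlignment {

	// Make partition p='1' of table `foo` inherit the table's current SD, so a
	// later format change (e.g. to RCFile) is reflected on the partition.
	static void alignPartitionWithTable(IMetaStoreClient client) throws Exception {
		Table table = client.getTable("default", "foo");
		Partition partition = client.getPartition("default", "foo", Collections.singletonList("1"));
		partition.getSd().setInputFormat(table.getSd().getInputFormat());
		partition.getSd().setOutputFormat(table.getSd().getOutputFormat());
		partition.getSd().setSerdeInfo(table.getSd().getSerdeInfo());
		client.alter_partition("default", "foo", partition);
	}
}
{code}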



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16803) Need to make sure partition inherit table spec when writing to Hive tables

2020-04-23 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-16803:
-
Affects Version/s: 1.10.0

> Need to make sure partition inherit table spec when writing to Hive tables
> --
>
> Key: FLINK-16803
> URL: https://issues.apache.org/jira/browse/FLINK-16803
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When inserting to a partition, Hive will update the partition's SD according 
> to table SD. For example, consider the following use case:
> {code}
> create table foo (x int,y int) partitioned by (p string);
> insert overwrite table foo partition (p='1') select * from ...;
> alter table foo set fileformat rcfile;
> insert overwrite table foo partition (p='1') select * from ...;
> {code}
> The second INSERT will write RC files to the partition and therefore the 
> partition SD needs to be updated to use RC file input format. Otherwise we 
> hit exception when reading from this partition.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-16802) Set schema info in JobConf for Hive readers

2020-04-23 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-16802.

Fix Version/s: 1.11.0
 Assignee: Rui Li
   Resolution: Fixed

master: 43cf121e77f8d001d1f601a23cc8d10e872bd747

> Set schema info in JobConf for Hive readers
> ---
>
> Key: FLINK-16802
> URL: https://issues.apache.org/jira/browse/FLINK-16802
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16802) Set schema info in JobConf for Hive readers

2020-04-23 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-16802:
-
Affects Version/s: 1.10.0

> Set schema info in JobConf for Hive readers
> ---
>
> Key: FLINK-16802
> URL: https://issues.apache.org/jira/browse/FLINK-16802
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi commented on pull request #11791: [FLINK-17210][sql-parser][hive] Implement database DDLs for Hive dialect

2020-04-23 Thread GitBox


JingsongLi commented on pull request #11791:
URL: https://github.com/apache/flink/pull/11791#issuecomment-618782334


   CC: @danny0405 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-16803) Need to make sure partition inherit table spec when writing to Hive tables

2020-04-23 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-16803:


Assignee: Rui Li

> Need to make sure partition inherit table spec when writing to Hive tables
> --
>
> Key: FLINK-16803
> URL: https://issues.apache.org/jira/browse/FLINK-16803
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When inserting to a partition, Hive will update the partition's SD according 
> to table SD. For example, consider the following use case:
> {code}
> create table foo (x int,y int) partitioned by (p string);
> insert overwrite table foo partition (p='1') select * from ...;
> alter table foo set fileformat rcfile;
> insert overwrite table foo partition (p='1') select * from ...;
> {code}
> The second INSERT will write RC files to the partition and therefore the 
> partition SD needs to be updated to use RC file input format. Otherwise we 
> hit exception when reading from this partition.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #11884: [FLINK-17345][python][table] Support register and get Python UDF in Catalog.

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11884:
URL: https://github.com/apache/flink/pull/11884#issuecomment-618376627


   
   ## CI report:
   
   * 570f7e22326fc2f2d9f7dff4066192b604e7c479 Travis: 
[CANCELED](https://travis-ci.com/github/flink-ci/flink/builds/161749776) Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=154)
 
   * 3579637b38484fb5589c309fea8e803ad3c2e4e0 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk commented on a change in pull request #11857: [FLINK-17180][runtime] Implement SchedulingPipelinedRegion interface

2020-04-23 Thread GitBox


zhuzhurk commented on a change in pull request #11857:
URL: https://github.com/apache/flink/pull/11857#discussion_r414264754



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/flip1/RestartPipelinedRegionFailoverStrategy.java
##
@@ -84,34 +77,8 @@ public RestartPipelinedRegionFailoverStrategy(
 			ResultPartitionAvailabilityChecker resultPartitionAvailabilityChecker) {
 
 		this.topology = checkNotNull(topology);
-		this.regions = Collections.newSetFromMap(new IdentityHashMap<>());
-		this.vertexToRegionMap = new HashMap<>();
 		this.resultPartitionAvailabilityChecker = new RegionFailoverResultPartitionAvailabilityChecker(
 			resultPartitionAvailabilityChecker);
-
-		// build regions based on the given topology
-		LOG.info("Start building failover regions.");
-		buildFailoverRegions();
-	}
-
-	// ------------------------------------------------------------------------
-	//  region building
-	// ------------------------------------------------------------------------
-
-	private void buildFailoverRegions() {
-		final Set<Set<SchedulingExecutionVertex>> distinctRegions =
-			PipelinedRegionComputeUtil.computePipelinedRegions(topology);
-
-		// creating all the failover regions and register them
-		for (Set<SchedulingExecutionVertex> regionVertices : distinctRegions) {
-			LOG.debug("Creating a failover region with {} vertices.", regionVertices.size());
-			final FailoverRegion failoverRegion = new FailoverRegion(regionVertices);
-			regions.add(failoverRegion);
-			for (SchedulingExecutionVertex vertex : regionVertices) {
-				vertexToRegionMap.put(vertex.getId(), failoverRegion);
-			}
-		}
-
-		LOG.info("Created {} failover regions.", regions.size());

Review comment:
   I'd prefer to make it INFO. Regions are basic components now, but users 
currently get little visibility into them.
   And it is a short, one-time log, so it would not clutter the logs or cause any 
performance issue.
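
   For context, a hedged sketch of how the strategy obtains regions after this 
refactoring (method names assumed from FLINK-17180's SchedulingPipelinedRegion, 
not quoted verbatim):

   ```java
   // The topology now owns region computation, so the strategy only looks regions
   // up instead of maintaining its own `regions` set and `vertexToRegionMap`.
   SchedulingPipelinedRegion region = topology.getPipelinedRegionOfVertex(failedVertexId);
   for (SchedulingExecutionVertex vertex : region.getVertices()) {
       verticesToRestart.add(vertex.getId());
   }
   ```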





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11791: [FLINK-17210][sql-parser][hive] Implement database DDLs for Hive dialect

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11791:
URL: https://github.com/apache/flink/pull/11791#issuecomment-615162346


   
   ## CI report:
   
   * a6642cdfead306f1192f73c861e7201af9d38c56 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/160705250) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7651)
 
   * 6359340c0b5b01923d7ac2824b9eb1071e0f2f37 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11687: [FLINK-16536][network][checkpointing] Implement InputChannel state recovery for unaligned checkpoint

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11687:
URL: https://github.com/apache/flink/pull/11687#issuecomment-611445542


   
   ## CI report:
   
   * 5bad018f00f85a7359345187a12d7938aa510d25 UNKNOWN
   * d51ce7f47381d99b843278cd701dcff223761a0b UNKNOWN
   * cba096ae3d8eba4a0d39c64659f76ad10a62be27 UNKNOWN
   * 23818c4b30eb9714154080253a090fc63d983925 UNKNOWN
   * 9b6278f15a4fc65d78671a145305df2c7310cc6a UNKNOWN
   * 4f7399680818f5d29e917e17720e00900822a43d UNKNOWN
   * 672a5e5b1a1c4607cb5e5e191877484f472dad1b Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/161699589) Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=146)
 
   * 12c398dd2dc5b6716a590490575914698573ab76 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161752886) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=157)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk commented on a change in pull request #11857: [FLINK-17180][runtime] Implement SchedulingPipelinedRegion interface

2020-04-23 Thread GitBox


zhuzhurk commented on a change in pull request #11857:
URL: https://github.com/apache/flink/pull/11857#discussion_r414264754



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/flip1/RestartPipelinedRegionFailoverStrategy.java
##
@@ -84,34 +77,8 @@ public RestartPipelinedRegionFailoverStrategy(
 			ResultPartitionAvailabilityChecker resultPartitionAvailabilityChecker) {
 
 		this.topology = checkNotNull(topology);
-		this.regions = Collections.newSetFromMap(new IdentityHashMap<>());
-		this.vertexToRegionMap = new HashMap<>();
 		this.resultPartitionAvailabilityChecker = new RegionFailoverResultPartitionAvailabilityChecker(
 			resultPartitionAvailabilityChecker);
-
-		// build regions based on the given topology
-		LOG.info("Start building failover regions.");
-		buildFailoverRegions();
-	}
-
-	// ------------------------------------------------------------------------
-	//  region building
-	// ------------------------------------------------------------------------
-
-	private void buildFailoverRegions() {
-		final Set<Set<SchedulingExecutionVertex>> distinctRegions =
-			PipelinedRegionComputeUtil.computePipelinedRegions(topology);
-
-		// creating all the failover regions and register them
-		for (Set<SchedulingExecutionVertex> regionVertices : distinctRegions) {
-			LOG.debug("Creating a failover region with {} vertices.", regionVertices.size());
-			final FailoverRegion failoverRegion = new FailoverRegion(regionVertices);
-			regions.add(failoverRegion);
-			for (SchedulingExecutionVertex vertex : regionVertices) {
-				vertexToRegionMap.put(vertex.getId(), failoverRegion);
-			}
-		}
-
-		LOG.info("Created {} failover regions.", regions.size());

Review comment:
   I'd prefer to make it INFO. Regions are basic components now, but users 
currently get little visibility into them.
   And it is a short, one-time log, so it would not clutter the logs.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-16662) Blink Planner failed to generate JobGraph for POJO DataStream converting to Table (Cannot determine simple type name)

2020-04-23 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091129#comment-17091129
 ] 

Jark Wu commented on FLINK-16662:
-

Btw, your approach is very similar to an end-to-end test. 

> Blink Planner failed to generate JobGraph for POJO DataStream converting to 
> Table (Cannot determine simple type name)
> -
>
> Key: FLINK-16662
> URL: https://issues.apache.org/jira/browse/FLINK-16662
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission, Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: chenxyz
>Assignee: LionelZ
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0
>
>
> When using Blink Planner to convert a POJO DataStream to a Table, Blink will 
> generate and compile the SourceConversion$1 code. If the Jar task is 
> submitted to Flink, since the UserCodeClassLoader is not used when generating 
> the JobGraph, the ClassLoader(AppClassLoader) of the compiled code cannot 
> load the POJO class in the Jar package, so the following error will be 
> reported:
>  
> {code:java}
> Caused by: org.codehaus.commons.compiler.CompileException: Line 27, Column 
> 174: Cannot determine simple type name "net"Caused by: 
> org.codehaus.commons.compiler.CompileException: Line 27, Column 174: Cannot 
> determine simple type name "net" at 
> org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:12124) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6746) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6507) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:6486) at 
> org.codehaus.janino.UnitCompiler.access$13800(UnitCompiler.java:215) at 
> org.codehaus.janino.UnitCompiler$21$1.visitReferenceType(UnitCompiler.java:6394)
>  at 
> org.codehaus.janino.UnitCompiler$21$1.visitReferenceType(UnitCompiler.java:6389)
>  at org.codehaus.janino.Java$ReferenceType.accept(Java.java:3917) at 
> org.codehaus.janino.UnitCompiler$21.visitType(UnitCompiler.java:6389) at 
> org.codehaus.janino.UnitCompiler$21.visitType(UnitCompiler.java:6382) at 
> org.codehaus.janino.Java$ReferenceType.accept(Java.java:3916) at 
> org.codehaus.janino.UnitCompiler.getType(UnitCompiler.java:6382) at 
> org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:7009) at 
> org.codehaus.janino.UnitCompiler.access$15200(UnitCompiler.java:215) at 
> org.codehaus.janino.UnitCompiler$21$2.visitCast(UnitCompiler.java:6425) at 
> org.codehaus.janino.UnitCompiler$21$2.visitCast(UnitCompiler.java:6403) at 
> org.codehaus.janino.Java$Cast.accept(Java.java:4887) at 
> org.codehaus.janino.UnitCompiler$21.visitRvalue(UnitCompiler.java:6403) at 
> org.codehaus.janino.UnitCompiler$21.visitRvalue(UnitCompiler.java:6382) at 
> org.codehaus.janino.Java$Rvalue.accept(Java.java:4105) at 
> org.codehaus.janino.UnitCompiler.getType(UnitCompiler.java:6382) at 
> org.codehaus.janino.UnitCompiler.findMostSpecificIInvocable(UnitCompiler.java:9150)
>  at org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:9036) at 
> org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:8938) at 
> org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:5060) at 
> org.codehaus.janino.UnitCompiler.access$9100(UnitCompiler.java:215) at 
> org.codehaus.janino.UnitCompiler$16.visitMethodInvocation(UnitCompiler.java:4421)
>  at 
> org.codehaus.janino.UnitCompiler$16.visitMethodInvocation(UnitCompiler.java:4394)
>  at org.codehaus.janino.Java$MethodInvocation.accept(Java.java:5062) at 
> org.codehaus.janino.UnitCompiler.compileGet(UnitCompiler.java:4394) at 
> org.codehaus.janino.UnitCompiler.compileGetValue(UnitCompiler.java:5575) at 
> org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:5019) at 
> org.codehaus.janino.UnitCompiler.access$8600(UnitCompiler.java:215) at 
> org.codehaus.janino.UnitCompiler$16.visitCast(UnitCompiler.java:4416) at 
> org.codehaus.janino.UnitCompiler$16.visitCast(UnitCompiler.java:4394) at 
> 

[jira] [Commented] (FLINK-16662) Blink Planner failed to generate JobGraph for POJO DataStream converting to Table (Cannot determine simple type name)

2020-04-23 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091128#comment-17091128
 ] 

Jark Wu commented on FLINK-16662:
-

Hi [~chenxyz], I think a better test is to add a real use case to the end-to-end 
tests. A simple idea is to extend {{StreamSQLTestProgram}} a bit: register a UDF 
that uses a user-defined POJO class, and call the UDF in the SQL pipeline. I guess 
this can reproduce the problem. 
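
A hedged sketch of such a test UDF (illustrative names, not existing code): the 
user-defined POJO class ships only in the job jar, so compiling the generated code 
during JobGraph creation must use the user code classloader.

{code:java}
import org.apache.flink.table.functions.ScalarFunction;

public class PojoUdf extends ScalarFunction {

	/** User-defined POJO that only exists in the job jar. */
	public static class UserPojo {
		public String payload;
	}

	public String eval(String payload) {
		// Forces the user-code class to be loaded when the generated code runs.
		UserPojo pojo = new UserPojo();
		pojo.payload = payload;
		return pojo.payload;
	}
}
{code}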

> Blink Planner failed to generate JobGraph for POJO DataStream converting to 
> Table (Cannot determine simple type name)
> -
>
> Key: FLINK-16662
> URL: https://issues.apache.org/jira/browse/FLINK-16662
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission, Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: chenxyz
>Assignee: LionelZ
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0
>
>
> When using Blink Planner to convert a POJO DataStream to a Table, Blink will 
> generate and compile the SourceConversion$1 code. If the Jar task is 
> submitted to Flink, since the UserCodeClassLoader is not used when generating 
> the JobGraph, the ClassLoader(AppClassLoader) of the compiled code cannot 
> load the POJO class in the Jar package, so the following error will be 
> reported:
>  
> {code:java}
> Caused by: org.codehaus.commons.compiler.CompileException: Line 27, Column 
> 174: Cannot determine simple type name "net"Caused by: 
> org.codehaus.commons.compiler.CompileException: Line 27, Column 174: Cannot 
> determine simple type name "net" at 
> org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:12124) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6746) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6507) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520) at 
> org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:6486) at 
> org.codehaus.janino.UnitCompiler.access$13800(UnitCompiler.java:215) at 
> org.codehaus.janino.UnitCompiler$21$1.visitReferenceType(UnitCompiler.java:6394)
>  at 
> org.codehaus.janino.UnitCompiler$21$1.visitReferenceType(UnitCompiler.java:6389)
>  at org.codehaus.janino.Java$ReferenceType.accept(Java.java:3917) at 
> org.codehaus.janino.UnitCompiler$21.visitType(UnitCompiler.java:6389) at 
> org.codehaus.janino.UnitCompiler$21.visitType(UnitCompiler.java:6382) at 
> org.codehaus.janino.Java$ReferenceType.accept(Java.java:3916) at 
> org.codehaus.janino.UnitCompiler.getType(UnitCompiler.java:6382) at 
> org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:7009) at 
> org.codehaus.janino.UnitCompiler.access$15200(UnitCompiler.java:215) at 
> org.codehaus.janino.UnitCompiler$21$2.visitCast(UnitCompiler.java:6425) at 
> org.codehaus.janino.UnitCompiler$21$2.visitCast(UnitCompiler.java:6403) at 
> org.codehaus.janino.Java$Cast.accept(Java.java:4887) at 
> org.codehaus.janino.UnitCompiler$21.visitRvalue(UnitCompiler.java:6403) at 
> org.codehaus.janino.UnitCompiler$21.visitRvalue(UnitCompiler.java:6382) at 
> org.codehaus.janino.Java$Rvalue.accept(Java.java:4105) at 
> org.codehaus.janino.UnitCompiler.getType(UnitCompiler.java:6382) at 
> org.codehaus.janino.UnitCompiler.findMostSpecificIInvocable(UnitCompiler.java:9150)
>  at org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:9036) at 
> org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:8938) at 
> org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:5060) at 
> org.codehaus.janino.UnitCompiler.access$9100(UnitCompiler.java:215) at 
> org.codehaus.janino.UnitCompiler$16.visitMethodInvocation(UnitCompiler.java:4421)
>  at 
> org.codehaus.janino.UnitCompiler$16.visitMethodInvocation(UnitCompiler.java:4394)
>  at org.codehaus.janino.Java$MethodInvocation.accept(Java.java:5062) at 
> org.codehaus.janino.UnitCompiler.compileGet(UnitCompiler.java:4394) at 
> org.codehaus.janino.UnitCompiler.compileGetValue(UnitCompiler.java:5575) at 
> org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:5019) at 
> 

[GitHub] [flink] zhuzhurk commented on pull request #11855: [FLINK-13639] Refactor the IntermediateResultPartitionID to consist o…

2020-04-23 Thread GitBox


zhuzhurk commented on pull request #11855:
URL: https://github.com/apache/flink/pull/11855#issuecomment-618779582


   Seems there was something wrong with Flink's Travis setup.
   Would you rebase the change and trigger Travis again?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on a change in pull request #11874: [FLINK-16935][table-planner-blink] Enable or delete most of the ignored test cases in blink planner.

2020-04-23 Thread GitBox


wuchong commented on a change in pull request #11874:
URL: https://github.com/apache/flink/pull/11874#discussion_r414251140



##
File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/expressions/DecimalTypeTest.scala
##
@@ -129,23 +130,6 @@ class DecimalTypeTest extends ExpressionTestBase {
   Long.MinValue.toString)
   }
 
-  @Ignore
-  @Test
-  def testDefaultDecimalCasting(): Unit = {

Review comment:
   Should we fix this problem? This test was moved from the old planner's 
`DecimalTypeTest#testDecimalCasting`, and in the blink planner it throws an 
exception when `cast` is used with a `TypeInformation`.   
   
   ```
   java.lang.ClassCastException: 
org.apache.flink.table.types.logical.LegacyTypeInformationType cannot be cast 
to org.apache.flink.table.types.logical.DecimalType
   
at 
org.apache.flink.table.planner.calcite.FlinkTypeFactory.newRelDataType$1(FlinkTypeFactory.scala:95)
at 
org.apache.flink.table.planner.calcite.FlinkTypeFactory.createFieldTypeFromLogicalType(FlinkTypeFactory.scala:146)
at 
org.apache.flink.table.planner.expressions.converter.CustomizedConvertRule.convertCast(CustomizedConvertRule.java:106)
at 
org.apache.flink.table.planner.expressions.converter.CustomizedConvertRule.lambda$convert$0(CustomizedConvertRule.java:98)
at java.util.Optional.map(Optional.java:215)
at 
org.apache.flink.table.planner.expressions.converter.CustomizedConvertRule.convert(CustomizedConvertRule.java:98)
at 
org.apache.flink.table.planner.expressions.converter.ExpressionConverter.visit(ExpressionConverter.java:97)
at 
org.apache.flink.table.planner.expressions.converter.ExpressionConverter.visit(ExpressionConverter.java:72)
at 
org.apache.flink.table.expressions.CallExpression.accept(CallExpression.java:122)
at 
org.apache.flink.table.planner.plan.QueryOperationConverter.convertExprToRexNode(QueryOperationConverter.java:741)
at 
org.apache.flink.table.planner.plan.QueryOperationConverter.access$800(QueryOperationConverter.java:132)
at 
org.apache.flink.table.planner.plan.QueryOperationConverter$SingleRelVisitor.lambda$convertToRexNodes$6(QueryOperationConverter.java:547)
at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at 
java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at 
java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at 
org.apache.flink.table.planner.plan.QueryOperationConverter$SingleRelVisitor.convertToRexNodes(QueryOperationConverter.java:548)
at 
org.apache.flink.table.planner.plan.QueryOperationConverter$SingleRelVisitor.visit(QueryOperationConverter.java:156)
at 
org.apache.flink.table.planner.plan.QueryOperationConverter$SingleRelVisitor.visit(QueryOperationConverter.java:152)
at 
org.apache.flink.table.operations.ProjectQueryOperation.accept(ProjectQueryOperation.java:75)
at 
org.apache.flink.table.planner.plan.QueryOperationConverter.defaultMethod(QueryOperationConverter.java:149)
at 
org.apache.flink.table.planner.plan.QueryOperationConverter.defaultMethod(QueryOperationConverter.java:131)
at 
org.apache.flink.table.operations.utils.QueryOperationDefaultVisitor.visit(QueryOperationDefaultVisitor.java:47)
at 
org.apache.flink.table.operations.ProjectQueryOperation.accept(ProjectQueryOperation.java:75)
at 
org.apache.flink.table.planner.calcite.FlinkRelBuilder.queryOperation(FlinkRelBuilder.scala:176)
at 
org.apache.flink.table.planner.expressions.utils.ExpressionTestBase.addTableApiTestExpr(ExpressionTestBase.scala:242)
at 
org.apache.flink.table.planner.expressions.utils.ExpressionTestBase.addTableApiTestExpr(ExpressionTestBase.scala:236)
at 
org.apache.flink.table.planner.expressions.utils.ExpressionTestBase.testTableApi(ExpressionTestBase.scala:226)
at 
org.apache.flink.table.planner.expressions.DecimalTypeTest.testDecimalCasting(DecimalTypeTest.scala:136)
   ```

##
File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/expressions/NonDeterministicTests.scala
##
@@ -23,71 +23,65 @@ import org.apache.flink.table.api.scala._
 import org.apache.flink.table.planner.expressions.utils.ExpressionTestBase
 import org.apache.flink.types.Row
 
-import org.junit.{Ignore, Test}
+import org.junit.Test
 
 /**
   * Tests that can only be checked manually as they are non-deterministic.
   */
 class NonDeterministicTests extends ExpressionTestBase {
 
-  

[GitHub] [flink] flinkbot edited a comment on pull request #11884: [FLINK-17345][python][table] Support register and get Python UDF in Catalog.

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11884:
URL: https://github.com/apache/flink/pull/11884#issuecomment-618376627


   
   ## CI report:
   
   * ecd663181bfe20940c7c2ff311100f9007316392 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161626047) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=133)
 
   * 570f7e22326fc2f2d9f7dff4066192b604e7c479 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161749776) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=154)
 
   * 3579637b38484fb5589c309fea8e803ad3c2e4e0 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11893: [FLINK-17344] Fix unstability of RecordWriterTest. testIdleTime and TaskMailboxProcessorTest.testIdleTime

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11893:
URL: https://github.com/apache/flink/pull/11893#issuecomment-618771008


   
   ## CI report:
   
   * b6a842dcaf10cca2b72a0155c1be580c5d9a3448 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/161751204) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=156)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11687: [FLINK-16536][network][checkpointing] Implement InputChannel state recovery for unaligned checkpoint

2020-04-23 Thread GitBox


flinkbot edited a comment on pull request #11687:
URL: https://github.com/apache/flink/pull/11687#issuecomment-611445542


   
   ## CI report:
   
   * 5bad018f00f85a7359345187a12d7938aa510d25 UNKNOWN
   * d51ce7f47381d99b843278cd701dcff223761a0b UNKNOWN
   * cba096ae3d8eba4a0d39c64659f76ad10a62be27 UNKNOWN
   * 23818c4b30eb9714154080253a090fc63d983925 UNKNOWN
   * 9b6278f15a4fc65d78671a145305df2c7310cc6a UNKNOWN
   * 4f7399680818f5d29e917e17720e00900822a43d UNKNOWN
   * 672a5e5b1a1c4607cb5e5e191877484f472dad1b Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/161699589) Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=146)
 
   * 12c398dd2dc5b6716a590490575914698573ab76 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] KarmaGYZ commented on a change in pull request #11854: [WIP] Introduce external resource framework

2020-04-23 Thread GitBox


KarmaGYZ commented on a change in pull request #11854:
URL: https://github.com/apache/flink/pull/11854#discussion_r414256108



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/externalresource/ExternalResourceUtils.java
##
@@ -76,4 +78,43 @@
 
 		return externalResourceConfigs;
 	}
+
+	/**
+	 * Instantiate the {@link ExternalResourceDriver}s for all of enabled external resources.
+	 * {@link ExternalResourceDriver}s are mapped by its resource name.
+	 */
+	public static Map<String, ExternalResourceDriver> externalResourceDriversFromConfig(Configuration config, PluginManager pluginManager) throws Exception {
+		final Set<String> resourceSet = getExternalResourceList(config);
+		LOG.info("Enabled external resources: {}", resourceSet);
+
+		if (resourceSet.isEmpty()) {
+			return Collections.emptyMap();
+		}
+
+		final Iterator<ExternalResourceDriverFactory> factoryIterator = pluginManager.load(ExternalResourceDriverFactory.class);
+		final Map<String, ExternalResourceDriverFactory> externalResourceFactories = new HashMap<>();
+		factoryIterator.forEachRemaining(
+			externalResourceDriverFactory -> {
+				externalResourceFactories.put(externalResourceDriverFactory.getClass().getName(), externalResourceDriverFactory);
+			});
+
+		final Map<String, ExternalResourceDriver> externalResourceDrivers = new HashMap<>();
+		for (String resourceName: resourceSet) {
+			final String driverFactoryClassName = config.getString(ExternalResourceConstants.EXTERNAL_RESOURCE_PREFIX + resourceName + ExternalResourceConstants.EXTERNAL_RESOURCE_DRIVER_FACTORY_SUFFIX, "");
+			if (StringUtils.isNullOrWhitespaceOnly(driverFactoryClassName)) {
+				LOG.warn("Could not found driver class name for {}. Please make sure {}{}{} is configured.",
+					resourceName, ExternalResourceConstants.EXTERNAL_RESOURCE_PREFIX, resourceName, ExternalResourceConstants.EXTERNAL_RESOURCE_DRIVER_FACTORY_SUFFIX);
+				continue;
+			}
+
+			if (externalResourceFactories.containsKey(driverFactoryClassName)) {
+				externalResourceDrivers.put(resourceName, externalResourceFactories.get(driverFactoryClassName).createExternalResourceDriver(config));
+				LOG.info("Add external resources driver for {}.", resourceName);
+			} else {
+				LOG.error("Could not find factory class {} for {}. The information might not be exposed/reported.", driverFactoryClassName, resourceName);

Review comment:
   I think it should be error level because the user explicitly set the 
`driverFactoryClassName`, but we could not find that class via the `PluginManager`.
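
   A hedged usage sketch (the exact config key strings below are assumptions for 
illustration; the real ones come from `ExternalResourceConstants` in this PR):

   ```java
   // given a PluginManager `pluginManager` discovered from the plugins directory
   Configuration config = new Configuration();
   config.setString("external-resources", "gpu");
   config.setString("external-resource.gpu.driver-factory.class", "org.example.GpuDriverFactory");

   // "gpu" maps to the driver created by org.example.GpuDriverFactory, provided the
   // factory is discoverable through the PluginManager; otherwise the error above is logged.
   Map<String, ExternalResourceDriver> drivers =
       ExternalResourceUtils.externalResourceDriversFromConfig(config, pluginManager);
   ```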





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] WeiZhong94 commented on pull request #11884: [FLINK-17345][python][table] Support register and get Python UDF in Catalog.

2020-04-23 Thread GitBox


WeiZhong94 commented on pull request #11884:
URL: https://github.com/apache/flink/pull/11884#issuecomment-618773790


   @dianfu Thanks for your review! I have addressed your comments in the latest 
commit.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on a change in pull request #11837: [FLINK-16160][table-planner-blink] Fix proctime()/rowtime() doesn't w…

2020-04-23 Thread GitBox


wuchong commented on a change in pull request #11837:
URL: https://github.com/apache/flink/pull/11837#discussion_r414255501



##
File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/stream/sql/TableSourceTest.scala
##
@@ -130,6 +131,60 @@ class TableSourceTest extends TableTestBase {
 util.verifyPlan(sqlQuery)
   }
 
+
+  @Test
+  def testLegacyRowTimeTableGroupWindow(): Unit = {
+util.tableEnv.connect(new ConnectorDescriptor("TestTableSourceWithTime", 
1, false) {
+  override protected def toConnectorProperties: JMap[String, String] = {
+Collections.emptyMap()
+  }

Review comment:
   Introducing a generic connector descriptor is another topic, and I'm 
concerned `CustomConnectorDescriptor` is not easy-to-use enough. 
   
   Maybe we can introduce a general `TestConnectorDescriptor` in tests.
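
   A minimal sketch of such a test-only descriptor (assuming only the 
`ConnectorDescriptor(type, version, formatNeeded)` constructor and 
`toConnectorProperties` method shown in the diff):

   ```java
   import java.util.Map;

   import org.apache.flink.table.descriptors.ConnectorDescriptor;

   // Reusable descriptor for tests: takes the connector properties directly,
   // so no dedicated subclass is needed per custom table source.
   public class TestConnectorDescriptor extends ConnectorDescriptor {

       private final Map<String, String> properties;

       public TestConnectorDescriptor(
               String type, int version, boolean formatNeeded, Map<String, String> properties) {
           super(type, version, formatNeeded);
           this.properties = properties;
       }

       @Override
       protected Map<String, String> toConnectorProperties() {
           return properties;
       }
   }
   ```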





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] WeiZhong94 commented on a change in pull request #11884: [FLINK-17345][python][table] Support register and get Python UDF in Catalog.

2020-04-23 Thread GitBox


WeiZhong94 commented on a change in pull request #11884:
URL: https://github.com/apache/flink/pull/11884#discussion_r414255081



##
File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/FunctionCatalog.java
##
@@ -194,22 +194,32 @@ public void registerCatalogFunction(
UnresolvedIdentifier unresolvedIdentifier,
Class functionClass,
boolean ignoreIfExists) {
-   final ObjectIdentifier identifier = 
catalogManager.qualifyIdentifier(unresolvedIdentifier);
-   final ObjectIdentifier normalizedIdentifier = 
FunctionIdentifier.normalizeObjectIdentifier(identifier);
 
try {
UserDefinedFunctionHelper.validateClass(functionClass);
} catch (Throwable t) {
throw new ValidationException(
String.format(
"Could not register catalog function 
'%s' due to implementation errors.",
-   identifier.asSummaryString()),
+   
catalogManager.qualifyIdentifier(unresolvedIdentifier).asSummaryString()),
t);
}
 
-   final Catalog catalog = 
catalogManager.getCatalog(normalizedIdentifier.getCatalogName())
-   .orElseThrow(IllegalStateException::new);
-   final ObjectPath path = identifier.toObjectPath();
+   final CatalogFunction catalogFunction = new CatalogFunctionImpl(

Review comment:
   The initial reason for this change was to provide an approach in 
`FunctionCatalog` to register a Python CatalogFunction. But it no longer seems 
necessary, because the upper layer accesses the Catalog object directly.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] WeiZhong94 commented on a change in pull request #11884: [FLINK-17345][python][table] Support register and get Python UDF in Catalog.

2020-04-23 Thread GitBox


WeiZhong94 commented on a change in pull request #11884:
URL: https://github.com/apache/flink/pull/11884#discussion_r414254201



##
File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogFunctionImpl.java
##
@@ -67,6 +67,9 @@ public CatalogFunction copy() {
 
@Override
public boolean isGeneric() {
+   if (functionLanguage == FunctionLanguage.PYTHON) {

Review comment:
   Currently a Python UDF is always generic and cannot be loaded via 
`Class.forName`, so we just return true if the CatalogFunction is a Python UDF.
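
   A minimal sketch of that rule (`isJvmFunctionGeneric` is a hypothetical 
placeholder for the existing reflection-based check, not a real method name):

   ```java
   @Override
   public boolean isGeneric() {
       if (functionLanguage == FunctionLanguage.PYTHON) {
           // Python UDFs are not on the JVM classpath, so they are always generic.
           return true;
       }
       return isJvmFunctionGeneric(); // existing Class.forName-based check for JVM functions
   }
   ```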





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] docete commented on a change in pull request #11837: [FLINK-16160][table-planner-blink] Fix proctime()/rowtime() doesn't w…

2020-04-23 Thread GitBox


docete commented on a change in pull request #11837:
URL: https://github.com/apache/flink/pull/11837#discussion_r414254173



##
File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/stream/sql/TableSourceTest.scala
##
@@ -130,6 +131,60 @@ class TableSourceTest extends TableTestBase {
 util.verifyPlan(sqlQuery)
   }
 
+
+  @Test
+  def testLegacyRowTimeTableGroupWindow(): Unit = {
+util.tableEnv.connect(new ConnectorDescriptor("TestTableSourceWithTime", 
1, false) {
+  override protected def toConnectorProperties: JMap[String, String] = {
+Collections.emptyMap()
+  }

Review comment:
   I don't think so. A dedicated descriptor for every custom table source is 
wasteful and should be avoided. Maybe we can use `CustomConnectorDescriptor` after 
we port it in FLINK-16029?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



